simulation_evaluation¶
The evaluation ROS package provides functionality to evaluate drives automatically.
├── launch
│ ├── test
│ │ ├── drive.test
│ │ ├── sign_evaluation_node.test
│ │ ├── speaker_node.test
│ │ └── state_machine_node.test
│ ├── drive_test.launch
│ ├── evaluation.launch
│ ├── evaluation_test.launch
│ ├── referee_node.launch
│ ├── sign_evaluation_node.launch
│ ├── speaker_node.launch
│ └── state_machine_node.launch
├── msg
│ ├── Broadcast.msg
│ ├── Referee.msg
│ ├── SignEvaluation.msg
│ ├── Speaker.msg
│ ├── State.msg
│ ├── TrafficSign.msg
│ └── TrafficSigns.msg
├── param
│ ├── drive
│ │ ├── paths
│ │ ├── default.yaml
│ │ └── topics.yaml
│ ├── referee
│ │ ├── default.yaml
│ │ └── topics.yaml
│ ├── sign_evaluation
│ │ ├── default.yaml
│ │ └── topics.yaml
│ ├── speaker
│ │ ├── default.yaml
│ │ └── topics.yaml
│ └── state_machine
│ └── topics.yaml
├── scripts
│ ├── drive_test_node
│ ├── evaluation_test_node
│ ├── referee_node
│ ├── sign_evaluation_node
│ ├── speaker_node
│ └── state_machine_node
├── src
│ ├── drive_test
│ │ ├── __init__.py
│ │ └── node.py
│ ├── evaluation_test
│ │ ├── __init__.py
│ │ └── node.py
│ ├── referee
│ │ ├── __init__.py
│ │ ├── node.py
│ │ └── referee.py
│ ├── sign_evaluation
│ │ ├── __init__.py
│ │ ├── label_conversion.yaml
│ │ ├── node.py
│ │ └── plots.py
│ ├── speaker
│ │ ├── speakers
│ │ ├── __init__.py
│ │ └── node.py
│ ├── state_machine
│ │ ├── docs
│ │ ├── state_machines
│ │ ├── states
│ │ ├── test
│ │ ├── __init__.py
│ │ └── node.py
│ └── __init__.py
├── test
│ ├── drive_test
│ ├── test_sign_evaluation_node
│ ├── test_speaker_node
│ └── test_state_machine_node
├── CMakeLists.txt
├── __init__.py
├── package.xml
└── setup.py
Packages and Modules
- simulation.src.simulation_evaluation.src.speaker package
- simulation.src.simulation_evaluation.src.state_machine package
- simulation.src.simulation_evaluation.src.referee package
- simulation.src.simulation_evaluation.src.evaluation_test package
- simulation.src.simulation_evaluation.src.drive_test package
- simulation.src.simulation_evaluation.src.sign_evaluation package
The simulation can be used to automatically detect errors in the car’s driving behavior. The simulation can even calculate a Carolo Cup score. The goal is to evaluate the car’s behavior in the same way a real referee would do.
Note
Currently, the simulation cannot imitate manual interventions. If the car makes a mistake, the drive is considered a failure.
The evaluation pipeline consists of three main components: the SpeakerNode, the StateMachineNode, and the RefereeNode. Each of them is described in more detail below.
![Schema of the Evaluation Pipeline](../_images/graphviz-86c374f603ceb87e7931673f0f58961867ea7b0c.png)
Schema of the Evaluation Pipeline¶
SpeakerNode¶
The simulation.src.simulation_evaluation.src.speaker.node subscribes to the car’s state and accesses the groundtruth services. The SpeakerNode calls the groundtruth services for information about the simulated world, e.g. the positions of obstacles or whether the car has to stop at an intersection. Additionally, the speaker node receives the car’s frame, i.e. the car’s dimensions, and its twist, i.e. its linear and angular speed, through the car state topic. Inside the node, there are multiple simulation.src.simulation_evaluation.src.speaker.speakers which use this information to complete different tasks:
- simulation.src.simulation_evaluation.src.speaker.speakers.zone.ZoneSpeaker: Provides contextual information (e.g. the car is allowed to overtake, the car is inside a parking area)
- simulation.src.simulation_evaluation.src.speaker.speakers.area.AreaSpeaker: Encodes where the car currently is (e.g. on the right side of the road)
- simulation.src.simulation_evaluation.src.speaker.speakers.event.EventSpeaker: Detects events such as collisions, parking attempts, …
- simulation.src.simulation_evaluation.src.speaker.speakers.speed.SpeedSpeaker: Converts the car’s speed into discrete intervals (e.g. between 60 and 70)
- simulation.src.simulation_evaluation.src.speaker.speakers.broacast.BroadcastSpeaker: Provides high-level information about the drive (e.g. the distance the car has driven, the id and type of the section the car is currently driving in)
The speakers’ results are used for two purposes.
The Zone-, Area-, Event-, and SpeedSpeaker create Speaker messages that the StateMachineNode’s state machines take as input.
The BroadcastSpeaker creates a Broadcast message that the RefereeNode uses to calculate a score.
The Speaker message carries a type (and a name for debugging).
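For inspection and debugging, a small listener can be attached to one of the speaker topics. The following is only a minimal sketch: the topic name is an assumption, the actual names are configured in param/speaker/topics.yaml.

```python
#!/usr/bin/env python3
"""Minimal sketch: listen to one of the speaker topics and log its messages."""
import rospy

from simulation_evaluation.msg import Speaker


def on_speaker(msg: Speaker) -> None:
    # A Speaker message carries a type and, for debugging, a human-readable name.
    rospy.loginfo("speaker event: type=%s name=%s", msg.type, msg.name)


if __name__ == "__main__":
    rospy.init_node("speaker_listener")
    # Hypothetical topic name; see param/speaker/topics.yaml for the real topics.
    rospy.Subscriber("/simulation/evaluation/speaker/zone", Speaker, on_speaker)
    rospy.spin()
```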
Note
The SpeakerNode can be launched with:
roslaunch simulation_evaluation speaker_node.launch
StateMachineNode¶
The simulation.src.simulation_evaluation.src.state_machine.node.StateMachineNode
contains multiple state machines that are used to automatically track the car’s behavior over time.
The current state of each state machine is published as a State message on a separate topic.
State Machines¶
There are six state machines which track the state of the drive.
LaneStateMachine¶
This state machine keeps track of driving on the correct part of the road.
See simulation.src.simulation_evaluation.src.state_machine.state_machines.lane
for implementation details.
Graph of LaneStateMachine¶
OvertakingStateMachine¶
This state machine keeps track of overtaking obstacles.
See simulation.src.simulation_evaluation.src.state_machine.state_machines.overtaking
for implementation details.
Graph of OvertakingStateMachine¶
ParkingStateMachine¶
This state machine keeps track of parking.
See simulation.src.simulation_evaluation.src.state_machine.state_machines.parking
for implementation details.
Graph of ParkingStateMachine¶
PriorityStateMachine¶
This state machine keeps track of the car correctly stopping or halting in front of stop or halt lines.
See simulation.src.simulation_evaluation.src.state_machine.state_machines.priority
for implementation details.
Graph of PriorityStateMachine¶
ProgressStateMachine¶
This state machine keeps track of whether the car has started, is driving, or has finished the drive.
See simulation.src.simulation_evaluation.src.state_machine.state_machines.progress
for implementation details.
Graph of ProgressStateMachine¶
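To illustrate the general pattern these state machines follow, here is a minimal, self-contained sketch of a progress-style state machine. The state and input names are assumptions for illustration only; the actual implementation lives in simulation.src.simulation_evaluation.src.state_machine.state_machines.progress.

```python
"""Conceptual sketch of a progress-style state machine (not the package's implementation)."""


class ProgressStateMachineSketch:
    """Tracks whether the car has started, is driving, or has finished the drive."""

    def __init__(self):
        self.state = "before_start"

    def handle(self, event: str) -> str:
        # Transition table: (current state, input event) -> next state.
        # Unknown combinations leave the state unchanged.
        transitions = {
            ("before_start", "car_started"): "driving",
            ("driving", "reached_end"): "finished",
        }
        self.state = transitions.get((self.state, event), self.state)
        return self.state


if __name__ == "__main__":
    sm = ProgressStateMachineSketch()
    print(sm.handle("car_started"))  # driving
    print(sm.handle("reached_end"))  # finished
```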
Note
The StateMachineNode can be launched with:
roslaunch simulation_evaluation state_machine_node.launch
SpeedStateMachine¶
This state machine keeps track of how fast the car is going and whether it is keeping to the speed limits.
See simulation.src.simulation_evaluation.src.state_machine.state_machines.speed
for implementation details.
Graph of SpeedStateMachine¶
RefereeNode¶
The simulation.src.simulation_evaluation.src.referee.node
is used to evaluate the output of the state machines and calculate a score.
The referee publishes a Referee message containing a state, a score and more information about the current drive.
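For example, the current score can be read with a small helper script. This is only a sketch: the topic and field names are assumptions, the actual definitions are in msg/Referee.msg and param/referee/topics.yaml.

```python
#!/usr/bin/env python3
"""Minimal sketch: read a single Referee message and print the current drive state and score."""
import rospy

from simulation_evaluation.msg import Referee

if __name__ == "__main__":
    rospy.init_node("referee_score_reader")
    # Hypothetical topic name; the real one is configured in param/referee/topics.yaml.
    msg = rospy.wait_for_message("/simulation/evaluation/referee", Referee)
    # Field names are assumptions; check msg/Referee.msg for the actual definition.
    rospy.loginfo("drive state=%s score=%s", msg.state, msg.score)
```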
Note
The RefereeNode can be launched with:
roslaunch simulation_evaluation referee_node.launch
DriveTestNode¶
On start, the simulation.src.simulation_evaluation.src.drive_test.node
runs through a series of steps to set up the simulation’s evaluation pipeline and subsequently starts
KITcar_brain.
It then monitors the car’s progress and shuts down once the car has completed or failed the drive,
based on the referee_node’s output.
Launch
Start the DriveTestNode and the complete simulation with
roslaunch simulation_evaluation drive_test.launch
However, drive.test provides a more interesting use case: it is a ROS test that yields a binary result indicating whether the car broke any rules while driving on a given road in a given mission mode.
ROS Test
Test the car’s behavior using the DriveTestNode with:
rostest simulation_evaluation drive.test
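The road and mission mode are controlled through the test’s launch arguments (see param/drive), which can be overridden with the usual name:=value syntax. The argument names below are only illustrative assumptions; the actual names are defined in drive.test:
rostest simulation_evaluation drive.test road:=default_road mission_mode:=1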
EvaluationTestNode¶
The SpeakerNode’s dependence on groundtruth services and the car state topic means that a complete road (for the groundtruth) and Gazebo (for the car’s state) are necessary to test and debug the nodes in this package.
The simulation.src.simulation_groundtruth.src.groundtruth.test.mock_node
can be used to create a groundtruth of simple predefined roads.
However, the car state is still missing: the simulation.src.simulation_evaluation.src.evaluation_test.node
can be used to create CarState messages as if the car were driving along a predefined path at a predefined speed.

RVIZ window with mocked groundtruth and blue car ‘driving’ on predefined path.¶
Available roads, paths, and other parameters are described in simulation_groundtruth/param/mock_groundtruth/default.yaml.
Note
The EvaluationTestNode can be launched with:
roslaunch simulation_evaluation evaluation_test_node.launch
By default, this also launches RVIZ; append rviz:=false to prevent RVIZ from opening.
The speakers base their interpretation on the groundtruth, queried from simulation.src.simulation_groundtruth.src.groundtruth topics, and on the car’s current position and speed published by simulation.src.gazebo_simulation.src.car_state.node.