Keeping Self-driving Decisions Simple: Starsky Robotics’ Behavior Planning
When people ask me what I do for a living, the shortest answer I give is: I help build safe autonomous trucks. I’ve been working on the behavior planning layer of Starsky Robotics’ autonomous driving stack for the last two years. Put simply, I develop software that allows our trucks to make good decisions, and I work with the rest of the team to show that the system is doing the right thing.
“Behavior planning” is a bit of an unusual term, even within the industry, so let me first explain what we at Starsky mean by it. The behavior planning layer is responsible for the high-level decision making of the truck, which we take to be decisions with a lead time of a few seconds or more, like lane changes, taking an off-ramp, or pulling over to the side of the road in an emergency.
At Starsky, we are able to keep this decision making surprisingly simple because our trucks need to drive themselves only on highways. This simplicity is good; it is exactly what will allow us to deploy a real, validated, autonomous trucking solution.
Starsky’s behavior planning makes decisions by taking information about the environment around the truck from our perception stack, about where we want to go from a route plan, and about the current truck state from the truck’s and our system’s internal monitoring. It then checks against preset rules to see which behaviors are possible, and carries out a prediction to see which one is best to execute.
For example, to see if a right lane change is possible, we would first check that a lane to the right of the truck exists. Then, we would predict how the scene around us would evolve for two scenarios: one where we change lanes, and one where we stay in our current lane. Finally, we would evaluate these two predictions, and pick the one which gives the best outcome.
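The check-predict-evaluate loop above can be sketched in a few lines. This is a toy illustration under stated assumptions, not Starsky's actual software: the `Scene`, `simulate`, and `cost` names and the gap-based cost model are all hypothetical.

```python
# Minimal sketch of the check-predict-evaluate loop for a right lane
# change. All names and the toy dynamics are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Scene:
    has_right_lane: bool
    gap_to_lead_vehicle_m: float   # distance to the vehicle ahead in our lane
    right_lane_gap_m: float        # free space ahead in the right lane

def simulate(scene: Scene, behavior: str) -> Scene:
    """Roll the scene forward under a behavior (toy model: the lead gap
    shrinks if we stay; we inherit the right-lane gap if we change)."""
    if behavior == "lane_change_right":
        return Scene(scene.has_right_lane, scene.right_lane_gap_m, scene.right_lane_gap_m)
    return Scene(scene.has_right_lane, scene.gap_to_lead_vehicle_m - 10.0, scene.right_lane_gap_m)

def cost(scene: Scene) -> float:
    """Lower is better: penalize a small gap to the vehicle ahead."""
    return max(0.0, 50.0 - scene.gap_to_lead_vehicle_m)

def choose_behavior(scene: Scene) -> str:
    # Step 1: preset rules prune behaviors that are not possible.
    candidates = ["keep_lane"]
    if scene.has_right_lane:
        candidates.append("lane_change_right")
    # Steps 2-3: predict each candidate's outcome and pick the best one.
    return min(candidates, key=lambda b: cost(simulate(scene, b)))

print(choose_behavior(Scene(True, 20.0, 60.0)))   # lane_change_right
```

The real system's prediction and evaluation are far richer, but the shape of the decision is the same: prune by rules, predict each candidate, pick the best outcome.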
OK, but there’s a large number of behaviors you could choose at any point in time, right? There could be a slow lane change, a fast lane change, a gradual slow-down, a full stop, and anything else you can think of, really, and that’s before considering possible maneuver sequences. And that prediction step is difficult: it depends on the intentions and movements of all the cars, trucks, bicycles and pedestrians around you. Plus, how those agents move depends both on the other agents and on the movement of our truck. You can see how this tends to explode into many, many possibilities, all of which need to be calculated and evaluated. This complexity has a cost, both in the compute needed and in our ability as humans to understand and validate the system. It is not simple.
Why we love trucks
So what can we do? Well, at Starsky we simplify behavior planning’s job by reducing each factor in the equation below:

BP Difficulty = (Number of Possible Behaviors) × (Prediction Complexity)

First, let’s take the set of possible behaviors.
Semi trucks are large and heavy. They take a long time to speed up and slow down. They are not very maneuverable. They are not very fast. They do not have passengers that are impatient to get to their destination. The more conservative action is generally best for a truck. Trucks spend the vast majority of their time on highways, where only a limited number of maneuvers are required.
Highways are highly constrained environments – they need to be for humans to be allowed to drive so fast on them – so we can define simple rules about which behaviors are possible when (going back to the basic example: no right lane change if there is no right lane). All of this adds up to only a very small number of behaviors being possible at any given point in time.
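One simple way to express such rules is as preconditions attached to each behavior. This is a hedged sketch of the idea, not Starsky's implementation; the behavior and predicate names are assumptions made up for illustration.

```python
# Rule-based pruning: a behavior is possible only if every one of its
# preconditions currently holds. Names are illustrative assumptions.
PRECONDITIONS = {
    "keep_lane":         [],
    "lane_change_left":  ["left_lane_exists", "left_gap_clear"],
    "lane_change_right": ["right_lane_exists", "right_gap_clear"],
    "take_off_ramp":     ["off_ramp_ahead", "in_rightmost_lane"],
    "pull_over":         ["shoulder_available"],
}

def possible_behaviors(facts: set) -> list:
    """Return the behaviors whose preconditions all hold."""
    return [b for b, pre in PRECONDITIONS.items() if all(p in facts for p in pre)]

facts = {"right_lane_exists", "right_gap_clear", "shoulder_available"}
print(possible_behaviors(facts))
# ['keep_lane', 'lane_change_right', 'pull_over']
```

Because the highway rule set is small and explicit, the candidate list that survives pruning is short, which is exactly what keeps the downstream prediction step cheap.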
The highway environment also greatly reduces prediction complexity. We do not have to consider complex pedestrian traffic like that at intersections and crosswalks (which I, staying true to my Irish vocabulary, would call zebra crossings). There are no bicycle lanes. There is no cross traffic. Because of the limited number of maneuvers possible, the intentions of other vehicles are much easier to identify. Cars generally travel in their lane, with lane changes and merges. This greatly simplifies the prediction step. We still need to think about inter-dependencies between vehicles around us, but they are fewer in number and in type. It is still hard (and very, very interesting to work on), but it is a much more tractable problem.
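To see why lane-keeping traffic is so much easier to predict, consider the simplest possible model: each vehicle keeps its lane and speed, so predicting the scene reduces to a handful of independent one-dimensional extrapolations. Real highway predictors also model lane changes, merges and interactions; this constant-velocity sketch is an assumption made for illustration, not Starsky's predictor.

```python
# Toy constant-velocity, lane-keeping prediction: each vehicle is
# (lane_index, position_m, speed_mps) and evolves independently.
def predict(vehicles, horizon_s, dt=0.5):
    """Return a predicted trajectory (list of states) per vehicle."""
    steps = int(horizon_s / dt)
    return [
        [(lane, pos + speed * dt * k, speed) for k in range(steps + 1)]
        for lane, pos, speed in vehicles
    ]

trajs = predict([(0, 0.0, 25.0), (1, 30.0, 20.0)], horizon_s=2.0)
print(trajs[0][-1])   # (0, 50.0, 25.0): lane kept, 50 m travelled in 2 s
```

Contrast this with an urban intersection, where pedestrians, cyclists and cross traffic make every agent's future depend on every other's, and the gap in prediction complexity is clear.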
At Starsky, we want to keep our system simple. Building an autonomous truck is a complex, uncertain engineering task no matter what, so we aim to reduce complexity wherever we can, while still reaching our goal of hauling freight with unmanned trucks.
Our trucks are autonomous on highways, where decision making is more straightforward, and are teleoperated off highways, where a human has the edge. We operate within a strict operational design domain: we pre-approve routes before running them, and we have a defined set of weather and other environmental conditions under which we operate. If an unusual situation that our autonomous system cannot handle arises over the course of a trip, a teleop driver is alerted and can take over control of the truck.
All of this keeps our decision making simple.
This has advantages in the everyday development and debugging of behavior planning, but the major benefit is that it allows us to fully validate our system. We can identify a full list of possible failure modes and proactively address them in our design and implementation. This failure mode analysis allows us to define safety requirements on detecting and reacting to system failures. We can then develop tests, on and off trucks, that fully validate that we meet both our normal decision-making requirements and our safety requirements. This is Starsky’s approach to safety across the system, and it is what makes our solution real. Simple is good: it allows us to be safe, to know we’re safe, and to show we’re safe.
Keeping Self-driving Decisions Simple: Starsky Robotics’ Behavior Planning was originally published in Hacker Noon on Medium.