Learning characteristics of AI-powered autonomous vehicles
There is something very compelling, but also puzzling, about how real autonomous cars learn to drive.
Cars like Teslas, which have some level of driving autonomy, learn all the time. Their code, the program (sometimes called a "routine"), is not a statically written set of rules. Rather, the program learns to adapt to the various circumstances that the car and driver will encounter.
Building autonomous guidance for a planet like Earth, on its roads, and sometimes even a bit off them, is a challenging task.
We humans, on the other hand, are highly adaptive to changing conditions. We exercise the principle of caution when we sense that a totally novel situation is impending. Or, in the worst case, we crash.
Autonomous vehicles, and the intelligence behind them, are basically not that different. They also exercise caution, erring on the side of safety rather than taking risks. The exposure to risk, and the chosen risk level, are more controllable in computers than in humans. We are biologically prone to lose interest in things we find too static and dull, whereas computers are ever vigilant. Their consciousness does not drift to otherworldly issues or personal problems mid-drive. Computers just keep doing the drill of driving.
Driving, for anyone who has earned a driver's license, is an interesting mix of things: first, there is the body of knowledge about your car; second, you need to learn the rules of traffic; and third, you must master the practical part of actually controlling the car. Pedals, the steering wheel, a gear stick of some kind, and the turn signals will be your friends from the moment you learn to drive until however long you keep driving.
We as drivers quickly adapt to the way a particular car has been designed. No one knows how to drive right from birth. It's an acquired skill.
And thus — as we age or face neurological challenges, the loss of this plasticity can lead to significant difficulties in maintaining the cognitive and motor skills required for safe driving. This decline is often implicated in the decision to withdraw a driver’s license, particularly in older adults.
Brain Plasticity and Driving
Driving involves a complex interplay of cognitive, sensory, and motor functions. These include:
- Cognitive processing: Making decisions, reacting to unexpected events, and planning routes.
- Sensory integration: Using visual, auditory, and tactile information to navigate and avoid hazards.
- Motor coordination: Controlling the steering wheel, pedals, and other vehicle systems in a synchronized manner.
Autonomous cars, on the other hand, learn to drive by being immersed in real traffic. They see the world through a mix of sensors: cameras, rangefinding lasers, ultrasonic sensors, and so on. In addition, they can be guided by GPS navigation systems.
But the intricate and dynamic things that cars need to do, regardless of whether a Homo sapiens or a computer is driving, are the following:
- avoid obstacles
- keep a safe velocity (situational speed)
- be prepared to stop the vehicle within the visible part of the road
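The third rule can be turned into a toy formula: the car should never go faster than a speed from which it can stop within the road it can currently see. Here is a minimal sketch in Python; the deceleration and reaction-time values are assumptions for illustration, not real vehicle parameters.

```python
import math

def safe_speed(visible_distance_m: float,
               max_deceleration: float = 6.0,
               reaction_time_s: float = 0.5) -> float:
    """Rough upper bound on speed (m/s) so the car can stop
    within the visible stretch of road.

    Total stopping distance: d = v * t_react + v**2 / (2 * a).
    We solve that quadratic for v.
    """
    a, t = max_deceleration, reaction_time_s
    # v**2 / (2a) + t*v - d = 0  =>  v = a * (-t + sqrt(t**2 + 2d/a))
    return a * (-t + math.sqrt(t * t + 2.0 * visible_distance_m / a))
```

For example, with about 50 metres of visible road and these assumed parameters, the bound comes out around 22 m/s, roughly 78 km/h; halve the visibility and the safe speed drops sharply, which matches the intuition behind the rule.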
I was watching the superbly interesting video by Tenenbaum, here:
Josh Tenenbaum — Scaling Intelligence the Human Way — IPAM at UCLA
Let's make some hands-on coolness with Python and vector graphics.
I love depicting simulations in simple 2D plane vector graphics.
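As a starting point for those 2D depictions, here is a minimal sketch of a plane-vector type. The class name `Vec2` and its methods are my own invention for illustration, not any particular graphics library's API.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Vec2:
    """A minimal immutable 2D vector for plane simulations."""
    x: float
    y: float

    def __add__(self, other: "Vec2") -> "Vec2":
        return Vec2(self.x + other.x, self.y + other.y)

    def scaled(self, k: float) -> "Vec2":
        return Vec2(self.x * k, self.y * k)

    def rotated(self, angle_rad: float) -> "Vec2":
        # standard counterclockwise rotation in the plane
        c, s = math.cos(angle_rad), math.sin(angle_rad)
        return Vec2(c * self.x - s * self.y, s * self.x + c * self.y)
```

With just addition, scaling, and rotation you can already represent a car's position and heading and draw them as arrows in any 2D plotting tool.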
Inputs of a car sensor system
The modalities in a car could be really sophisticated in reality, but let's start with a simple model again.
We just think of a thing as a "signal". Regardless of what kind of thing it really is, we are going to input (as a verb) a new signal into our system. The signal can be a level between -5 and 5, and it is probably obvious why we want to center it at 0: driving is basically nothing more than a series of left/right decisions.
In the end, we are interested in turning the wheel a bit to the left, keeping it steady, or turning it to the right. This is the rudimentary model of control.
Then there are two pedals, which can be united and modeled as an "a" (acceleration) vector. This is the combined effect of reality in a car: we can press the gas pedal, or press the brake pedal. This affects the car's speed.
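Putting the steering signal and the acceleration together, a toy simulation step might look like the sketch below. The gain of 0.05 rad/s per signal unit, the sign convention for left/right, and the no-reversing rule are all assumptions of this model, not properties of any real car.

```python
import math
from dataclasses import dataclass

@dataclass
class CarState:
    x: float = 0.0        # position, metres
    y: float = 0.0
    heading: float = 0.0  # radians, 0 = along +x axis
    speed: float = 0.0    # m/s

def step(state: CarState, steer: float, accel: float,
         dt: float = 0.1) -> CarState:
    """Advance one tick given a steering signal in [-5, 5]
    (0 = keep straight) and an acceleration in m/s^2
    (negative values model braking)."""
    steer = max(-5.0, min(5.0, steer))         # clamp to the signal range
    turn_rate = steer * 0.05                   # assumed gain: rad/s per unit
    heading = state.heading + turn_rate * dt
    speed = max(0.0, state.speed + accel * dt)  # this toy car never reverses
    return CarState(
        x=state.x + speed * math.cos(heading) * dt,
        y=state.y + speed * math.sin(heading) * dt,
        heading=heading,
        speed=speed,
    )
```

Running `step` in a loop with a steering signal of 0 and a constant positive acceleration simply drives the car straight along the x axis, which is a handy sanity check before adding anything fancier.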
Outputs of the “car control system” in real AI-powered cars?
I am not sure, but would imagine something like this:
- Pedal motion — acceleration, or deceleration
- steering of front wheels (steering left/right)
- steering of the rear wheels, where the car supports it (most do not)
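Those guesses could be captured in a small container type. The field names and units here are illustrative assumptions of mine, not any real vehicle's control API.

```python
from dataclasses import dataclass

@dataclass
class ControlOutput:
    """One tick of guessed control outputs."""
    pedal: float             # > 0 accelerate, < 0 brake (m/s^2, assumed)
    front_steer: float       # front-wheel angle, radians
    rear_steer: float = 0.0  # rear-wheel angle; fixed at 0 for most cars
```

Defaulting `rear_steer` to zero mirrors the note above: most cars have no rear-wheel steering, so the common case needs no extra typing.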
Probabilistic programming?
Tenenbaum mentions probabilistic programming as a topic. I haven't yet gotten that far in the video; I'll keep watching. Comment here if you find these kinds of topics interesting, or if you are more into the programming / simulation ideas.