Science fiction and its mind-boggling technology filled our childhood. Maturity revealed the undercurrent of truth in science fiction: what seemed to be fairy magic, a witch's trick, or an alien conspiracy is nothing but a tweak of technology. Who imagined that Johnny Cab could actually hit the real roads as an autonomous car?
What if we could minimize or completely eliminate accidents by taking human error, the cause of almost 94% of such mishaps, out of the picture? This idea is brought to reality through the concept of self-driving cars.
A self-driving car, also known as an autonomous vehicle (AV), is a vehicle that is capable of sensing its environment and moving safely with little or no human input. A human passenger is not required to take control of the vehicle at any time, nor is a human passenger required to be present in the vehicle at all. An autonomous car can go anywhere a traditional car goes and do everything that an experienced human driver does. There are many profound benefits that could arise from a driverless future. Instead of focusing on lanes, turns and traffic, you can reply to your emails, eat, relax and make your morning much more productive.
Tales from the Past:
The journey of the concept of a "self-driving automobile" begins in the 1470s with a rough blueprint of a self-propelled cart created by Leonardo da Vinci. But it was only in the 20th century that concerted efforts were made in this field. In 1925, Houdina Radio Control demonstrated the "American Wonder" on New York City streets, and in 1926 the phantom motorcar operated by the Achen Motor company haunted the streets of Milwaukee, though both were controlled by radio waves.
In GM's 1939 exhibit, Norman Bel Geddes presented the first self-driving car concept: an electric vehicle guided by radio-controlled electromagnetic fields generated by magnetized metal spikes embedded in the roadway. By 1958, General Motors had made this concept a reality. GM's vision of the future was a 1960s filled with cars controlled via radio and propelled by electromagnets. In the 1980s, the DARPA-funded Autonomous Land Vehicle (ALV) project in the United States made use of new technologies and introduced the first self-driving automobile technology to use LIDAR, computer vision and autonomous robotic control, directing a robotic vehicle at speeds of up to 31 km/h.
Research on computer-controlled vehicles began at Carnegie Mellon in 1984 as part of the DARPA Strategic Computing Initiative, producing its first vehicle, Navlab 1, in 1986.
By 1989, Carnegie Mellon University had pioneered the use of neural networks to steer and control autonomous vehicles. The first commercially available self-driving vehicle, the Navia shuttle, was launched in 2014 with a maximum speed of 12 miles per hour, though it was only used for transporting passengers in restricted areas, paving the way for more advanced self-driving vehicles.
The basic functioning of an autonomous vehicle can be classified into several tasks, which are further divided into different sub-tasks:
Perception, or perceiving the environment – This includes tracking the car's motion and identifying the various elements in the world around it (road signs, vehicles, pedestrians, etc.)
Motion planning – Planning how to reach from point A to point B.
Vehicle Control – Taking the appropriate steering, braking and acceleration decisions to control the vehicle's position and velocity on the road.
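The three tasks above can be sketched as a simple sense-plan-act loop. This is an illustrative toy, not a real AV stack; all function names, thresholds and data shapes are invented for the sketch.

```python
# Toy sense-plan-act loop for the three tasks described above.
# All names and values are illustrative.

def perceive(sensor_data):
    """Perception: identify elements in the environment and track ego motion."""
    return {"obstacles": sensor_data.get("obstacles", []),
            "ego_speed": sensor_data.get("speed", 0.0)}

def plan_motion(world, goal):
    """Motion planning: decide how to move from point A to point B."""
    # Trivial policy: drive at 10 m/s unless an obstacle is in the way.
    return {"target_speed": 0.0 if world["obstacles"] else 10.0, "goal": goal}

def control(world, plan):
    """Vehicle control: turn the plan into throttle/brake commands."""
    error = plan["target_speed"] - world["ego_speed"]
    return {"throttle": max(error, 0.0), "brake": max(-error, 0.0)}

world = perceive({"obstacles": [], "speed": 4.0})
plan = plan_motion(world, goal="B")
cmd = control(world, plan)   # accelerates toward the 10 m/s target
```

A real stack replaces each toy function with the perception, planning and control modules discussed later in this article.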
Operational Design Domain (ODD)
constitutes the conditions under which a given system is designed to function (including environmental conditions, time of day, roadway and other characteristics under which the car will perform reliably).
Levels of automation:
Before classifying automotive vehicles on the basis of the level of automation, it is essential to know about the different driving tasks that need to be performed:
Lateral Control: Task of steering and navigating laterally on the road
Longitudinal Control: Task of controlling the velocity of the car through actions like braking and acceleration.
Object and Event Detection and Response (OEDR): It is essentially the ability to detect objects and events that immediately affect the driving task and to react to them appropriately.
Planning: Primarily concerned with the long- and short-term plans needed to travel to a destination or execute maneuvers like lane changes and intersection crossing.
Miscellaneous: Actions like signaling with indicators, interacting with other drivers and so on.
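OEDR in particular can be illustrated with a minimal sketch: detect the nearest obstacle, estimate the time to collision, and choose a response. The thresholds and response names here are invented for illustration, not taken from any real system.

```python
# Hedged OEDR sketch: react to the nearest obstacle based on
# time-to-collision (TTC). Thresholds are illustrative only.

def oedr_react(ego_speed_mps, obstacle_distance_m):
    """Return a driving response given speed (m/s) and obstacle distance (m)."""
    if ego_speed_mps <= 0.0:
        return "maintain"            # stationary: nothing to react to
    ttc = obstacle_distance_m / ego_speed_mps   # seconds until impact
    if ttc < 2.0:
        return "emergency_brake"     # imminent collision
    if ttc < 5.0:
        return "slow_down"           # event affects the driving task soon
    return "maintain"
```

For example, at 10 m/s an obstacle 15 m ahead gives a TTC of 1.5 s, which this sketch treats as an emergency-braking event.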
The Society of Automotive Engineers (SAE) currently defines six levels of driving automation (SAE Standard J3016), ranging from Level 0 (fully manual) to Level 5 (fully autonomous). These levels have been adopted by the U.S. Department of Transportation.
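The six SAE J3016 levels can be summarized in a small lookup table. The level names below follow the standard; the helper function encodes the common rule of thumb that at Levels 0-2 the human driver must supervise at all times.

```python
# SAE J3016 levels of driving automation.
SAE_LEVELS = {
    0: "No Driving Automation",
    1: "Driver Assistance",
    2: "Partial Driving Automation",
    3: "Conditional Driving Automation",
    4: "High Driving Automation",
    5: "Full Driving Automation",
}

def driver_attention_required(level):
    """At Levels 0-2 the human driver must supervise the vehicle at all times."""
    return level <= 2
```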
Autonomous cars depend upon a number of hardware and software components for their functioning.
Sensors: Device that measures or detects a property of the environment or changes to a property. They are classified into two categories:
Exteroceptive (They record the properties of the environment)
Proprioceptive (They record the properties of the ego vehicle)
Some widely used Exteroceptive sensors are:
Cameras: They are passive light-collecting sensors capable of capturing rich, detailed information about a scene. The parameters on which the choice of camera is made are:
Resolution (Number of pixels which makes the image, specifying the quality of the image)
Field of View (Horizontal and vertical angular extent that is visible to the camera)
Dynamic Range (Difference between the darkest and lightest tones in an image)
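The field of view of a camera follows directly from its sensor size and focal length under the standard pinhole model, FOV = 2·atan(d / 2f), where d is the sensor dimension and f the focal length. A quick sketch:

```python
import math

def field_of_view_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angular field of view under the pinhole camera model."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A 36 mm wide sensor behind an 18 mm lens gives a 90-degree horizontal FOV.
fov = field_of_view_deg(36.0, 18.0)
```

Shorter focal lengths widen the field of view at the cost of angular resolution, which is one reason AV camera rigs mix wide and narrow lenses.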
LIDAR (Light Detection and Ranging): LIDAR sensing involves shooting light beams into the environment and measuring the reflected return. By measuring the amount of returned light and the time of flight of the beam, both the intensity of, and the range to, the reflecting object can be estimated. Because it carries its own light source, LIDAR is not affected by poor lighting conditions.
RADAR (Radio Detection and Ranging): Radars robustly detect large objects in the environment. They are useful in adverse weather as they are mostly unaffected by precipitation.
SONAR (Sound Navigation and Ranging): These sensors measure range using sound waves. Sonars are short-range, inexpensive ranging devices, which makes them good for parking scenarios.
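All three ranging sensors share the same time-of-flight arithmetic: the beam travels out and back, so range = wave speed × round-trip time / 2. Only the wave speed differs between them:

```python
# Time-of-flight ranging, common to LIDAR, RADAR and SONAR:
# the signal travels to the object and back, hence the division by 2.

SPEED_OF_LIGHT = 299_792_458.0   # m/s, for LIDAR and RADAR
SPEED_OF_SOUND = 343.0           # m/s in air at about 20 degrees C, for SONAR

def range_from_tof(round_trip_time_s, wave_speed_mps):
    """Distance to the reflecting object from the round-trip time."""
    return wave_speed_mps * round_trip_time_s / 2

lidar_range = range_from_tof(1e-6, SPEED_OF_LIGHT)   # ~150 m for a 1 us return
sonar_range = range_from_tof(0.01, SPEED_OF_SOUND)   # ~1.7 m for a 10 ms echo
```

The enormous gap in wave speeds is why sonar is inherently a short-range sensor while LIDAR and RADAR can range over hundreds of meters.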
For all these sensors, the primary parameters on which their efficiency depends are:
Range of the Sensors
Field of View
Autonomous cars create and maintain a map of their surroundings based on a variety of sensors situated in different parts of the vehicle. Radar sensors monitor the position of nearby vehicles. Video cameras detect traffic lights, read road signs, track other vehicles, and look for pedestrians. Lidar sensors bounce pulses of light off the car’s surroundings to measure distances, detect road edges, and identify lane markings. Ultrasonic sensors in the wheels detect curbs and other vehicles when parking.
Proprioceptive sensors include:
Global Navigation Satellite Systems (GNSS) and Inertial Measurement Units (IMU): GNSS receivers are used to measure the vehicle's position, velocity and sometimes heading. In addition, the IMU measures the angular rotation rate and the acceleration of the ego vehicle, and the combined measurements can be used to estimate the orientation of the vehicle.
Wheel Odometry Sensors: This sensor tracks the wheel rate of rotation and uses it to estimate the speed and heading rate of the car. This is the same sensor which tracks the mileage of the car.
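Wheel odometry reduces to simple kinematics: each wheel's linear speed is its angular rate times the wheel radius, and under a differential model the speed difference between the left and right wheels gives the heading rate. The wheel radius and track width below are assumed values for illustration.

```python
# Wheel odometry sketch under a differential-drive model.
# Wheel radius and track width are assumed, illustrative values.

WHEEL_RADIUS_M = 0.3   # assumed wheel radius
TRACK_WIDTH_M = 1.6    # assumed distance between left and right wheels

def odometry(omega_left, omega_right):
    """Estimate (speed m/s, heading rate rad/s) from wheel angular rates (rad/s)."""
    v_left = omega_left * WHEEL_RADIUS_M
    v_right = omega_right * WHEEL_RADIUS_M
    speed = (v_left + v_right) / 2
    heading_rate = (v_right - v_left) / TRACK_WIDTH_M
    return speed, heading_rate

speed, yaw_rate = odometry(10.0, 10.0)   # equal wheel rates: straight line
```

Equal wheel rates give zero heading rate (straight-line motion); any imbalance shows up as a turn.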
Computer Hardware: The most important component is the brain which is the main decision making unit of the car. It takes in all the sensor data and outputs the commands needed to drive the car.
Most companies prefer to design their own computing system that matches their own sensors and algorithms. Common examples are Nvidia’s Drive PX and Intel and Mobileye’s EyeQ. Any computing brain for self-driving needs both serial and parallel compute modules, particularly for image and LIDAR processing to do segmentation, object detection and mapping.
For these the following are employed:
GPU: Graphic Processing Unit
FPGA: Field Programmable Gate Array
ASIC: Application Specific Integrated Circuit
The software architecture includes the following five software modules: environment perception, environment mapping, motion planning, vehicle control, and finally the system supervisor.
We know that the car observes the environment around it using a variety of sensors. The raw sensor measurements are passed into two sets of modules dedicated to understanding the environment around the car: environment perception and environment mapping.
The environment perception module has two key responsibilities: first, identifying the current location of the autonomous vehicle in space (localizing the ego-vehicle), and second, classifying and locating important elements of the environment for the driving task. Examples of these elements include other cars, bikes, pedestrians, the road, road markings, and road signs: anything that directly affects the act of driving.
The environment mapping module creates a set of maps which locate objects in the environment around the autonomous vehicle for a range of different uses, from collision avoidance to egomotion tracking and motion planning. It maintains several different representations of the current environment; there are three types of maps: the occupancy grid map, the localization map and the detailed road map.
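The occupancy grid in particular is easy to sketch: the world is discretized into cells, and each cell stores the probability that it contains a static obstacle. The class below is a minimal illustration, not a production data structure; cell size and the blocked threshold are assumed values.

```python
# Minimal occupancy grid sketch: each cell holds an occupancy probability.
# Resolution and threshold values are illustrative.

class OccupancyGrid:
    def __init__(self, width_cells, height_cells, resolution_m=0.5):
        self.resolution = resolution_m
        # 0.0 = known free; 1.0 = known occupied.
        self.cells = [[0.0] * width_cells for _ in range(height_cells)]

    def _cell(self, x_m, y_m):
        """Map a world coordinate (meters) to a (row, col) cell index."""
        return int(y_m / self.resolution), int(x_m / self.resolution)

    def mark_occupied(self, x_m, y_m, p=1.0):
        row, col = self._cell(x_m, y_m)
        self.cells[row][col] = p

    def is_blocked(self, x_m, y_m, threshold=0.5):
        row, col = self._cell(x_m, y_m)
        return self.cells[row][col] >= threshold

grid = OccupancyGrid(10, 10)
grid.mark_occupied(2.2, 1.1)          # e.g. a parked car detected by LIDAR
```

The motion planner can then query `is_blocked` along candidate paths for collision checking.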
The motion planning module makes all the decisions about what actions to take and where to drive based on all of the information provided by the perception and mapping modules. The motion planning module's main output is a safe, efficient and comfortable planned path that moves the vehicle towards its goal. The planned path is then executed by the fourth module, the controller.
The controller module takes the path and decides on the best steering angle, throttle position, brake pedal position, and gear settings to precisely follow the planned path. These commands drive the actuators, which act as the hands of the car. A typical controller separates the control problem into longitudinal control and lateral control. The lateral controller outputs the steering angle required to maintain the planned trajectory, whereas the longitudinal controller regulates the throttle, gears and braking system to achieve the correct velocity.
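A common choice for the longitudinal controller is a PID loop on speed error. The sketch below shows the idea; the gains are illustrative placeholders, not tuned values, and a real controller would add integral windup protection and actuator limits.

```python
# Sketch of a longitudinal PID speed controller. Gains are illustrative.

class LongitudinalPID:
    def __init__(self, kp=0.5, ki=0.1, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target_speed, current_speed, dt):
        """One control step: map speed error to throttle/brake commands."""
        error = target_speed - current_speed
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Positive control effort means throttle; negative means brake.
        return {"throttle": max(u, 0.0), "brake": max(-u, 0.0)}

ctrl = LongitudinalPID()
cmd = ctrl.step(target_speed=10.0, current_speed=8.0, dt=0.1)
```

The lateral controller is structured similarly but outputs a steering angle, typically from a geometric method such as pure pursuit or Stanley rather than a plain PID.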
The fifth and final module is the system supervisor. The system supervisor monitors all parts of the software stack, as well as the hardware output, to make sure that all systems are working as intended. The system supervisor is also responsible for informing the safety driver of any problems found in the system.
IoT (Internet of Things) Enabled Autonomous Car:
Think of a scenario where we can control the actions of our car from our mobile phones or PCs. Irrespective of where our car is parked, it would be just a click away. Just as we can control Wi-Fi-enabled air-conditioners or other electronic devices in our home from anywhere in the world, our cars would be equipped with the same facility. We would not have to go all the way to where our car is parked; instead, we could use technologies like Google Assistant, Siri, Alexa or Cortana to call our car to our location. Here we have tried to develop a prototype implementing this idea.
Requirements and Procedure:
The car has to be provided with a transmitter-receiver module (a Wi-Fi module such as the ESP8266) which always has to remain connected to the internet.
The Wi-Fi module remains connected to a cloud service (a cloud server is a virtual server running in a cloud computing environment; it is built, hosted and delivered via a cloud computing platform over the internet, and can be accessed remotely) which, on receiving a trigger, sends the appropriate message to our car. In this context, the cloud sends the car a trigger along with our exact location through the Wi-Fi module, instructing the car to come to our location. The autonomous car then simply uses its mapping technology to reach us.
A web-based service like IFTTT (IF This Then That) has to be used to send a trigger to the cloud on our instruction. Here we create an applet connecting our Google Assistant (or any other virtual assistant) to the cloud service. The applet is triggered immediately when we pass a particular command to the Google Assistant along with our location; the applet then triggers the cloud and Step 2 is carried out.
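The message the cloud relays to the car's Wi-Fi module could be a small JSON document like the one sketched below. The payload fields, command name and delivery mechanism are all hypothetical choices for this prototype, not an existing protocol.

```python
# Hedged sketch of the cloud-to-car message for the summon prototype.
# The field names and command are hypothetical.

import json

def build_summon_payload(latitude, longitude):
    """Package the trigger and the user's location for the car."""
    return {"command": "summon",
            "destination": {"lat": latitude, "lon": longitude}}

payload = build_summon_payload(28.6139, 77.2090)
message = json.dumps(payload)
# In a real prototype, the IFTTT applet's webhook would POST this JSON to
# the cloud service, which forwards it to the ESP8266 on the car.
```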
This is quite similar to a human starting an autonomous car with the only difference being that here the cloud gives the instructions to the car rather than the person himself.
This technology can also be used for parking vehicles in multi-level parking (as in shopping malls or large buildings), in which case the car will retrace its path (stored in its memory) to reach the user.
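The path-retracing idea can be sketched as storing maneuvers on the way in and replaying them in reverse order with opposite actions on the way out. The maneuver encoding below is invented for illustration.

```python
# Sketch of path retracing for multi-level parking. The car records each
# maneuver on the way in and replays the reversed, inverted path on the
# way out. The (action, amount) encoding is illustrative.

OPPOSITE = {"forward": "backward", "backward": "forward",
            "left": "right", "right": "left"}

def record_path(segments):
    """Store driven segments, e.g. ('forward', 5 m) or ('left', 90 deg)."""
    return list(segments)

def retrace(path):
    """Undo the path: reverse the order and invert each maneuver."""
    return [(OPPOSITE[action], amount) for action, amount in reversed(path)]

stored = record_path([("forward", 5), ("left", 90), ("forward", 3)])
way_out = retrace(stored)
```

Pure dead-reckoning retracing drifts over distance, so a real system would correct the replayed path against its maps and sensors.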
Such a technology can make people's lives much easier and safer, since they will be able to operate their cars in any emergency situation even if the car is not very close to them. This will save people a lot of time and make life faster and more productive.
This is just a prototype of the idea and further changes may be required in the algorithm to make it more efficient.
Recent implementation- TESLA Application:
The Tesla app is the firm’s official smartphone app, which allows owners to connect their smartphones directly to their cars, giving them access to remotely control features.
Through the app, one can remotely control the car's climate, check the battery status, move the car forward and backward, and monitor the car and its surrounding area.
One of the more recent features is Smart Summon, introduced in 2019.
Smart Summon: Smart Summon is Tesla's autonomous parking feature, which enables a Tesla vehicle to leave a parking space and navigate around obstacles to its owner.
With the help of this feature, the car can navigate more complex environments and parking spaces.
Smart Summon is only intended for use in private parking lots and driveways.
Smart Summon requires the latest version of the Tesla mobile app.
The owner must be within 200 feet of the car.
According to Tesla, owners are responsible for their car and must monitor it and its surroundings at all times, keeping it within their line of sight, because it may not detect all obstacles.
A summoned car has a maximum speed of around 6 miles per hour, slow enough to stop almost instantly, which reduces the chance of serious injury.
Market survey and future business scenario:
The global autonomous vehicle market demand is estimated to be at approx. 6.7 thousand units in 2020 and is anticipated to expand at a CAGR of 63.1% from 2021 to 2030. Self-driving cars, also known as autonomous vehicles (AV), are a key innovation in the automotive industry. They have high growth potential and are acting as a catalyst in the technological developments of automobiles.
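Applying the quoted CAGR to the 2020 base gives a sense of the projected scale. This is just the compound-growth arithmetic on the figures above, not an independent forecast: units_n = units_0 × (1 + rate)^years.

```python
# Compound-growth arithmetic on the quoted figures (6.7 thousand units in
# 2020, 63.1% CAGR). Purely illustrative, not an independent forecast.

def project_units(base_units, cagr, years):
    """Project demand forward by compounding the annual growth rate."""
    return base_units * (1 + cagr) ** years

units_2030 = project_units(6_700, 0.631, 10)   # on the order of 0.9 million units
```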
Intel and Strategy Analytics estimate a $7 trillion boost from the emerging industry, with $2 trillion from the US itself.
Market share insights:
Some of the key players operating in the market are Audi AG, BMW AG, Daimler AG, Ford Motor Company, General Motors Company, Google LLC, Honda Motor Co., Ltd., Nissan Motor Company, Tesla, Toyota Motor Corporation, Uber Technologies, Volvo Car Corporation, and Volkswagen AG.
Leading players in the self-driving vehicle market are involved in collaborations and the development of innovative solutions to stay ahead of the competition.
For instance, in August 2017, BMW AG, Intel Corporation, Mobileye (a subsidiary of Intel Corporation), and Fiat Chrysler Automobiles (FCA) signed a memorandum of understanding for FCA to join the companies in the development of an autonomous driving platform. The memorandum is aimed at combining the resources, capabilities, and strengths of all the companies to reduce time to market, increase development efficiency, and enhance the platform's technology. Such initiatives are likely to substantially contribute to the market growth.