Ford has recently announced its intent to have a high-volume, fully autonomous SAE Level 4-capable vehicle in commercial operation in 2021, in a ride-hailing or ride-sharing service.
To get there, the company is investing in or collaborating with four startups to enhance its autonomous vehicle development, doubling its Silicon Valley team and more than doubling its Palo Alto campus.
“The next decade will be defined by automation of the automobile, and we see autonomous vehicles as having as significant an impact on society as Ford’s moving assembly line did 100 years ago,” said Mark Fields, Ford president and CEO. “We’re dedicated to putting on the road an autonomous vehicle that can improve safety and solve social and environmental challenges for millions of people – not just those who can afford luxury vehicles.”
Autonomous vehicles in 2021 are part of Ford Smart Mobility, the company’s plan to be a leader in autonomous vehicles, as well as in connectivity, mobility, the customer experience, and data and analytics.
Driving autonomous vehicle leadership
Building on more than a decade of autonomous vehicle research and development, Ford’s first fully autonomous vehicle will be a Society of Automotive Engineers-rated Level 4-capable vehicle without a steering wheel or gas and brake pedals. It is being specifically designed for commercial mobility services, such as ride sharing and ride hailing, and will be available in high volumes.
“Ford has been developing and testing autonomous vehicles for more than 10 years,” said Raj Nair, Ford executive vice president, Global Product Development, and chief technical officer. “We have a strategic advantage because of our ability to combine the software and sensing technology with the sophisticated engineering necessary to manufacture high-quality vehicles. That is what it takes to make autonomous vehicles a reality for millions of people around the world.”
This year, Ford will triple its autonomous vehicle test fleet to be the largest test fleet of any automaker – bringing the number to about 30 self-driving Fusion Hybrid sedans on the roads in California, Arizona and Michigan, with plans to triple it again next year.
Ford was the first automaker to begin testing its vehicles at Mcity, the University of Michigan’s simulated urban environment, the first automaker to publicly demonstrate autonomous vehicle operation in the snow and the first automaker to test its autonomous research vehicles at night, in complete darkness, as part of LiDAR sensor development.
To deliver an autonomous vehicle in 2021, Ford is announcing four key investments and collaborations that are expanding its strong research in advanced algorithms, 3D mapping, LiDAR, and radar and camera sensors:
- Velodyne: Ford has invested in Velodyne, the Silicon Valley-based leader in light detection and ranging (LiDAR) sensors. The aim is to quickly mass-produce a more affordable automotive LiDAR sensor. Ford has a longstanding relationship with Velodyne, and was among the first to use LiDAR for both high-resolution mapping and autonomous driving beginning more than 10 years ago
- SAIPS: Ford has acquired the Israel-based computer vision and machine learning company to further strengthen its expertise in artificial intelligence and enhance computer vision. SAIPS has developed algorithmic solutions in image and video processing, deep learning, signal processing and classification. This expertise will help Ford autonomous vehicles learn and adapt to their surroundings
- Civil Maps: Ford has invested in Berkeley, California-based Civil Maps to further develop high-resolution 3D mapping capabilities. Civil Maps has pioneered an innovative 3D mapping technique that is scalable and more efficient than existing processes. This provides Ford another way to develop high-resolution 3D maps of autonomous vehicle environments
- Nirenberg Neuroscience LLC: Ford has an exclusive licensing agreement with Nirenberg Neuroscience, a machine vision company founded by neuroscientist Dr. Sheila Nirenberg, who cracked the neural code the eye uses to transmit visual information to the brain. This has led to a powerful machine vision platform for performing navigation, object recognition, facial recognition and other functions, with many potential applications. For example, it is already being applied by Dr. Nirenberg to develop a device for restoring sight to patients with degenerative diseases of the retina. Ford’s partnership with Nirenberg Neuroscience will help bring humanlike intelligence to the machine learning modules of its autonomous vehicle virtual driver system
Silicon Valley expansion
Ford also is expanding its Silicon Valley operations, creating a dedicated campus in Palo Alto.
Adding two new buildings and 150,000 square feet of work and lab space adjacent to the current Research and Innovation Center, the expanded campus grows the company’s local footprint and supports plans to double the size of the Palo Alto team by the end of 2017.
“Our presence in Silicon Valley has been integral to accelerating our learning and deliverables driving Ford Smart Mobility,” said Ken Washington, Ford vice president, Research and Advanced Engineering. “Our goal was to become a member of the community. Today, we are actively working with more than 40 startups, and have developed a strong collaboration with many incubators, allowing us to accelerate development of technologies and services.”
Since the new Ford Research and Innovation Center Palo Alto opened in January 2015, the facility has rapidly grown to be one of the largest automotive manufacturer research centers in the region. Today, it is home to more than 130 researchers, engineers and scientists, who are increasing Ford’s collaboration with the Silicon Valley ecosystem.
Research and Innovation Center Palo Alto’s multi-disciplinary research and innovation is the newest of nearly a dozen of Ford’s global research, innovation, IT and engineering centers. The expanded Palo Alto campus opens in mid-2017.
Project Bloodhound saved
The British project to break the world land-speed record at a site in the Northern Cape has been saved by a new backer, after it went into bankruptcy proceedings in October.
Two weeks ago, and two months after entering voluntary administration, Bloodhound Programme Ltd announced it was shutting down. This week it announced that its assets, including the Bloodhound Supersonic Car (SSC), had been acquired by an enthusiastic – and wealthy – supporter.
“We are absolutely delighted that on Monday 17th December, the business and assets were bought, allowing the Project to continue,” the team said in a statement.
“The acquisition was made by Yorkshire-based entrepreneur Ian Warhurst. Ian is a mechanical engineer by training, with a strong background in managing a highly successful business in the automotive engineering sector, so he will bring a lot of expertise to the Project.”
Warhurst and his family, says the team, have been enthusiastic Bloodhound supporters for many years, and this inspired his new involvement with the Project.
“I am delighted to have been able to safeguard the business and assets preventing the project breakup,” he said. “I know how important it is to inspire young people about science, technology, engineering and maths, and I want to ensure Bloodhound can continue doing that into the future.
“It’s clear how much this unique British project means to people and I have been overwhelmed by the messages of thanks I have received in the last few days.”
The record attempt was due to be made late next year at Hakskeen Pan in the Kalahari Desert, where retired pilot Andy Green planned to beat the 1228km/h land-speed record he set in the United States in 1997. The target is for Bloodhound to become the first car to reach 1000mph (1610km/h). A track 19km long and 500 metres wide has been prepared, with members of the local community hired to clear 16 000 tons of rock and stone to smooth the surface.
The team said in its announcement this week: “Although it has been a frustrating few months for Bloodhound, we are thrilled that Ian has saved Bloodhound SSC from closure for the country and the many supporters around the world who have been inspired by the Project. We now have a lot of planning to do for 2019 and beyond.”
Motor Racing meets Machine Learning
The futuristic car technology of tomorrow is being built today in both racing cars and toys, writes ARTHUR GOLDSTUCK
The car of tomorrow, most of us imagine, is being built by the great automobile manufacturers of the world. More and more, however, we are seeing information technology companies joining the race to power the autonomous vehicle future.
Last year, chip-maker Intel paid $15.3-billion to acquire Israeli company Mobileye, a leader in computer vision for autonomous driving technology. Google’s autonomous taxi division, Waymo, has been valued at $45-billion.
Now there’s a new name to add to the roster of technology giants driving the future.
Amazon Web Services, the world’s biggest cloud computing service and a subsidiary of Amazon.com, last month unveiled a scale model autonomous racing car for developers to build new artificial intelligence applications. Almost in the same breath, at its annual re:Invent conference in Las Vegas, it showcased the work being done with machine learning in Formula 1 racing.
AWS DeepRacer is a 1/18th scale fully autonomous race car, designed to incorporate the features and behaviour of a full-sized vehicle. It boasts all-wheel drive, monster truck tyres, an HD video camera, and on-board computing power. In short, everything a kid would want of a self-driving toy car.
But then, it also adds everything a developer would need to make the car autonomous in ways that, for now, can only be imagined. It uses a new form of machine learning (ML), the technology that allows computer systems to improve their functions progressively as they receive feedback from their activities. ML is at the heart of artificial intelligence (AI), and will be core to autonomous, self-driving vehicles.
AWS has taken ML a step further, with an approach called reinforcement learning. This allows for quicker development of ML models and applications, and DeepRacer is designed to allow developers to experiment with and hone their skills in this area. It is built on top of another AWS platform, called Amazon SageMaker, which enables developers and data scientists to build, train, and deploy machine-learning models quickly and easily.
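Reinforcement learning, in essence, means the model improves by being rewarded for desirable behaviour rather than by studying labelled examples. A minimal sketch of the idea, using a toy Q-learning agent that learns to drive forward along a one-dimensional "track" (this is an illustration of the general technique only, not the DeepRacer API, whose details are not described here):

```python
import random

# Toy track: the agent starts at cell 0 and earns a reward for reaching the end.
TRACK_LENGTH = 5      # cells 0..4; cell 4 is the finish line
ACTIONS = [-1, +1]    # step backward or step forward

# Q-table: estimated future reward for each (cell, action) pair.
q = {(s, a): 0.0 for s in range(TRACK_LENGTH) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Move along the track; reward 1.0 only at the finish line."""
    nxt = max(0, min(TRACK_LENGTH - 1, state + action))
    reward = 1.0 if nxt == TRACK_LENGTH - 1 else 0.0
    return nxt, reward

random.seed(0)
for episode in range(200):
    state = 0
    while state != TRACK_LENGTH - 1:
        # Epsilon-greedy: usually exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # Core update: nudge the estimate toward reward + discounted future value.
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# After training, the learned policy should always choose "forward" (+1).
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(TRACK_LENGTH - 1)]
print(policy)
```

DeepRacer applies the same feedback loop at a much richer scale: camera frames stand in for the cell number, steering and throttle for the two toy actions, and staying on the track for the reward signal.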
Along with DeepRacer, AWS also announced the DeepRacer League, the world’s first global autonomous racing league, open to anyone who orders the scale model from AWS.
As if to prove that DeepRacer is not just a quirky entry into the world of motor racing, AWS also showcased the work it is doing with the Formula One Group. Ross Brawn, Formula 1’s managing director of Motor Sports, joined AWS CEO Andy Jassy during the keynote address at the re:Invent conference, to demonstrate how motor racing meets machine learning.
“More than a million data points a second are transmitted between car and team during a Formula 1 race,” he said. “From this data, we can make predictions about what we expect to happen in a wheel-to-wheel situation, overtaking advantage, and pit stop advantage. ML can help us apply a proper analysis of a situation, and also bring it to fans.
“Formula 1 is a complete team contest. If you look at a video of tyre-changing in a pit stop – it takes 1.6 seconds to change four wheels and tyres – blink and you will miss it. Imagine the training that goes into it? It’s also a contest of innovative minds.”
Formula 1 racing has more than 500 million global fans and generated $1.8 billion in revenue in 2017. As a result, there are massive demands on performance, analysis and information.
During a race, up to 120 sensors on each car generate up to 3GB of data and 1 500 data points – every second. It is impossible to analyse this data on the fly without an ML platform like Amazon SageMaker. It has a further advantage: the data scientists are able to incorporate 65 years of historical race data to compare performance, make predictions, and provide insights into the teams’ and drivers’ split-second decisions and strategies.
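Before any model can make predictions from that firehose, the raw readings have to be reduced on the fly. An illustrative sketch of that kind of per-second aggregation (the sensor names and values here are invented for the example; the article does not describe Formula 1's actual telemetry schema):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical one-second telemetry batch of (sensor_name, value) pairs.
# A real car streams up to 120 sensors; three are shown for illustration.
batch = [
    ("tyre_temp_fl", 102.4), ("tyre_temp_fl", 103.1),
    ("brake_pressure", 54.0), ("brake_pressure", 57.5),
    ("engine_rpm", 11800), ("engine_rpm", 11950),
]

def summarise(readings):
    """Collapse raw readings into per-sensor min/mean/max for downstream models."""
    by_sensor = defaultdict(list)
    for name, value in readings:
        by_sensor[name].append(value)
    return {
        name: {"min": min(vs), "mean": mean(vs), "max": max(vs)}
        for name, vs in by_sensor.items()
    }

summary = summarise(batch)
print(summary["engine_rpm"])
```

At race scale this reduction runs continuously over every car, with the summaries feeding the comparisons against 65 years of historical data described above.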
This means Formula 1 can pinpoint how a driver is performing and whether or not drivers have pushed themselves over the limit.
“By leveraging Amazon SageMaker and AWS’s machine-learning services, we are able to deliver these powerful insights and predictions to fans in real time,” said Pete Samara, director of innovation and digital technology at Formula 1.