Ford Motor Company and the Massachusetts Institute of Technology are collaborating on a new research project that measures how pedestrians move in urban areas to improve on-demand public transportation services, such as ride-hailing and point-to-point shuttle services.
The project will introduce a fleet of on-demand electric vehicle shuttles that operate on both city roads and walkways on the university’s Cambridge, Massachusetts, campus. The vehicles use LiDAR sensors and cameras to measure pedestrian flow, which helps predict demand for the shuttles and, in turn, lets researchers and drivers route them toward the areas of highest demand to better accommodate riders.
“The onboard sensors and cameras gather pedestrian data to estimate the flow of foot traffic,” said Ken Washington, vice president of Research and Advanced Engineering at Ford. “This helps us develop efficient algorithms that bring together relevant data. It improves mobility-on-demand services, and aids ongoing pedestrian detection and mapping efforts for autonomous vehicle research.”
Using a high-tech lab
The MIT research is being conducted by the Aeronautics and Astronautics Department’s Aerospace Controls Lab. ACL researches topics related to autonomous systems and control design for aircraft, spacecraft, and ground vehicles. Theoretical and experimental research is pursued in such areas as estimation and navigation, planning and learning under uncertainty, and vehicle autonomy.
“Through the mobility-on-demand system being developed for MIT’s campus, ACL can investigate new planning and prediction algorithms in a complex, but controlled, environment, while simultaneously providing a testbed framework for researchers and a service to the MIT community,” said ACL director Professor Jonathan How.
Hailing a ride
Ford and MIT researchers plan to introduce the service to a group of students and faculty beginning in September. This group will use a mobile application to hail one of three electric urban vehicles to their location and request to be dropped off at another destination on campus.
The electric vehicles are small enough to navigate the campus’s sidewalks while still leaving plenty of room for traditional pedestrian traffic. Each is outfitted with a weatherproof enclosure that shields riders from the elements – a feature particularly useful in New England’s punishing winters.
After requesting the shuttles via a smartphone app, MIT students and faculty won’t be waiting long for their ride to arrive.
During the past five months, Ford and MIT have used LiDAR sensors and cameras mounted to the vehicles to document pedestrian flow between different points on campus. LiDAR is the most efficient way to detect and localise objects in the environment surrounding the shuttles. The technology is much more accurate than GPS, emitting short pulses of laser light to precisely pinpoint the vehicles’ location on a map and detect the movement of nearby pedestrians and objects.
Using this data, researchers study the overall pattern of how pedestrian traffic moves across campus, which helps the researchers anticipate where the most demand for the shuttles will be at any given moment. This allows the shuttles to be carefully pre-positioned and routed to serve the MIT population as efficiently as possible.
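The pre-positioning idea can be sketched in a few lines: estimate demand per campus zone from recent pedestrian counts, then send idle shuttles to the busiest zones. This is a minimal illustration only, not the researchers’ actual algorithm; the zone names and counts below are invented.

```python
from collections import Counter

def position_shuttles(pedestrian_counts, num_shuttles):
    """Greedily assign idle shuttles to the zones with the most foot traffic.

    pedestrian_counts: mapping of campus zone -> recent pedestrian count
    Returns a list of zones, one per shuttle, busiest first.
    """
    ranked = Counter(pedestrian_counts).most_common(num_shuttles)
    return [zone for zone, _ in ranked]

# Hypothetical counts derived from LiDAR/camera observations:
counts = {"student_center": 120, "library": 45, "dorms": 80, "labs": 30}
print(position_shuttles(counts, 2))  # ['student_center', 'dorms']
```

A real system would weight these counts by time of day, class schedules and weather – the same factors the researchers describe – but the greedy ranking above captures the core idea.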
Researchers also take into account other factors that affect pedestrian movement on MIT’s campus, such as varying weather conditions, class schedules, and the dynamic habits of students and professors across different semesters.
Applying learnings to mobility services and beyond
This collaboration builds on Ford’s Dynamic Shuttle project, which provides point-to-point shuttle rides to employees on its Dearborn, Michigan, campus who request them via a mobile application. The MIT work advances that ride-hailing concept by examining the movement of pedestrians to predict demand and reduce wait times for shuttles.
What’s more, the algorithms and methods learned while navigating densely crowded pedestrian areas using LiDAR will also strengthen Ford’s autonomous and driver-assist technologies as the company continues to develop autonomous vehicles.
The project is one of more than 30 mobility research projects between Ford and universities in the U.S., Germany and China, aimed at helping the company and the academic world better understand how to improve mobility for millions of people globally.
University research partnerships are an important part of Ford’s broader effort to change the way the world moves. Ford Smart Mobility is the company’s plan to be a leader in connectivity, mobility, autonomous vehicles, the customer experience, and data and analytics.
Project Bloodhound saved
The British project to break the world land-speed record at a site in the Northern Cape has been saved by a new backer, after it entered administration in October.
Two weeks ago, and two months after entering voluntary administration, the Bloodhound Programme Limited announced it was shutting down. This week it announced that its assets, including the Bloodhound Supersonic Car (SSC), had been acquired by an enthusiastic – and wealthy – supporter.
“We are absolutely delighted that on Monday 17th December, the business and assets were bought, allowing the Project to continue,” the team said in a statement.
“The acquisition was made by Yorkshire-based entrepreneur Ian Warhurst. Ian is a mechanical engineer by training, with a strong background in managing a highly successful business in the automotive engineering sector, so he will bring a lot of expertise to the Project.”
Warhurst and his family, says the team, have been enthusiastic Bloodhound supporters for many years, and this inspired his new involvement with the Project.
“I am delighted to have been able to safeguard the business and assets, preventing the project’s break-up,” he said. “I know how important it is to inspire young people about science, technology, engineering and maths, and I want to ensure Bloodhound can continue doing that into the future.
“It’s clear how much this unique British project means to people and I have been overwhelmed by the messages of thanks I have received in the last few days.”
The record attempt was due to be made late next year at Hakskeen Pan in the Kalahari Desert, where retired pilot Andy Green planned to beat the 1228km/h land-speed record he set in the United States in 1997. The target is for Bloodhound to become the first car to reach 1000mph (1610km/h). A track 19km long and 500 metres wide has been prepared, with members of the local community hired to clear 16 000 tons of rock and stone to smooth the surface.
The team said in its announcement this week: “Although it has been a frustrating few months for Bloodhound, we are thrilled that Ian has saved Bloodhound SSC from closure for the country and the many supporters around the world who have been inspired by the Project. We now have a lot of planning to do for 2019 and beyond.”
Motor Racing meets Machine Learning
The futuristic car technology of tomorrow is being built today in both racing cars and toys, writes ARTHUR GOLDSTUCK
The car of tomorrow, most of us imagine, is being built by the great automobile manufacturers of the world. More and more, however, we are seeing information technology companies joining the race to power the autonomous vehicle future.
Last year, chip-maker Intel paid $15.3-billion to acquire Israeli company Mobileye, a leader in computer vision for autonomous driving technology. Google’s autonomous taxi division, Waymo, has been valued at $45-billion.
Now there’s a new name to add to the roster of technology giants driving the future.
Amazon Web Services, the world’s biggest cloud computing service and a subsidiary of Amazon.com, last month unveiled a scale model autonomous racing car for developers to build new artificial intelligence applications. Almost in the same breath, at its annual re:Invent conference in Las Vegas, it showcased the work being done with machine learning in Formula 1 racing.
AWS DeepRacer is a 1/18th-scale fully autonomous race car, designed to incorporate the features and behaviour of a full-sized vehicle. It boasts all-wheel drive, monster truck tyres, an HD video camera, and on-board computing power. In short, everything a kid would want in a self-driving toy car.
But then, it also adds everything a developer would need to make the car autonomous in ways that, for now, can only be imagined. It uses a new form of machine learning (ML), the technology that allows computer systems to improve their functions progressively as they receive feedback from their activities. ML is at the heart of artificial intelligence (AI), and will be core to autonomous, self-driving vehicles.
AWS has taken ML a step further with an approach called reinforcement learning. This allows for quicker development of ML models and applications, and DeepRacer is designed to let developers experiment with and hone their skills in this area. It is built on top of another AWS platform, called Amazon SageMaker, which enables developers and data scientists to build, train, and deploy machine-learning models quickly and easily.
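At its core, reinforcement learning means learning a behaviour from trial-and-error rewards rather than labelled examples. The toy sketch below runs a generic tabular Q-learning loop on a five-position “track” where the agent earns a reward only by reaching the goal – the same feedback loop DeepRacer models use at vastly larger scale. It is a self-contained illustration and does not use the actual DeepRacer or SageMaker APIs.

```python
import random

# Toy track: 5 positions in a line; the goal is position 4.
# Actions: 0 = move left, 1 = move right.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Environment dynamics: move one position; reward +1 only at the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0)

random.seed(0)
for _ in range(500):                     # training episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        a = random.randrange(N_ACTIONS) if random.random() < EPSILON \
            else max(range(N_ACTIONS), key=lambda x: Q[s][x])
        s2, r = step(s, a)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy should prefer moving right (action 1) in every non-goal state.
print([max(range(N_ACTIONS), key=lambda x: Q[s][x]) for s in range(N_STATES - 1)])
```

DeepRacer replaces this hand-written table with a neural network and the toy track with a camera feed, but the underlying loop – act, observe a reward, update the value estimate – is the same.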
Along with DeepRacer, AWS also announced the DeepRacer League, the world’s first global autonomous racing league, open to anyone who orders the scale model from AWS.
As if to prove that DeepRacer is not just a quirky entry into the world of motor racing, AWS also showcased the work it is doing with the Formula One Group. Ross Brawn, Formula 1’s managing director of Motor Sports, joined AWS CEO Andy Jassy during the keynote address at the re:Invent conference, to demonstrate how motor racing meets machine learning.
“More than a million data points a second are transmitted between car and team during a Formula 1 race,” he said. “From this data, we can make predictions about what we expect to happen in a wheel-to-wheel situation, overtaking advantage, and pit stop advantage. ML can help us apply a proper analysis of a situation, and also bring it to fans.
“Formula 1 is a complete team contest. If you look at a video of tyre-changing in a pit stop – it takes 1.6 seconds to change four wheels and tyres – blink and you will miss it. Imagine the training that goes into it. It’s also a contest of innovative minds.”
Formula 1 racing has more than 500 million global fans and generated $1.8 billion in revenue in 2017. As a result, there are massive demands on performance, analysis and information.
During a race, up to 120 sensors on each car generate up to 3GB of data and 1 500 data points – every second. Analysing this data on the fly is practically impossible without an ML platform like Amazon SageMaker. The platform has a further advantage: data scientists can incorporate 65 years of historical race data to compare performance, make predictions, and provide insights into the teams’ and drivers’ split-second decisions and strategies.
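To get a sense of scale, the quoted per-second figures can be extrapolated over a full race. The race length below is an assumption (roughly 90 minutes of running), not a figure from the article:

```python
SENSORS_PER_CAR = 120
DATA_RATE_GB_PER_SEC = 3          # per car, as quoted
POINTS_PER_SEC = 1_500            # per car, as quoted
RACE_SECONDS = 90 * 60            # assumed ~90-minute race

total_gb = DATA_RATE_GB_PER_SEC * RACE_SECONDS
total_points = POINTS_PER_SEC * RACE_SECONDS
print(f"~{total_gb:,} GB and {total_points:,} data points per car per race")
# → ~16,200 GB and 8,100,000 data points per car per race
```

Multiply that by twenty cars on the grid and the case for cloud-scale stream processing makes itself.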
This means Formula 1 can pinpoint how a driver is performing and whether or not drivers have pushed themselves over the limit.
“By leveraging Amazon SageMaker and AWS’s machine-learning services, we are able to deliver these powerful insights and predictions to fans in real time,” said Pete Samara, director of innovation and digital technology at Formula 1.