Autonomous material handling supports self-driving car safety

High-tech lift trucks and material handling systems share environments with pedestrians and human-operated vehicles, demonstrating that autonomous vehicle technology can operate safely around people.

December 7, 2016
By Robert Schoenberger
An autonomous vision-guided vehicle (VGV) delivers materials at a Whirlpool washing machine plant. Upfitted for autonomy by Seegrid, the vehicle uses five sets of twin cameras to develop a 3D view of its environment.

Whether it’s passenger cars or Class 8 trucks, autonomous vehicles need to sense the environments around them, identify fixed and moving obstacles in their paths, and safely navigate around them. None of those challenges are simple, and engineers and academics are debating which technology is best suited for sensing and how algorithms will instruct vehicles to react to obstacles.

While experts debate how to get cars to drive themselves on public roads, autonomous material handling devices already share space with pedestrians and human-navigated vehicles, safely navigating around obstacles to reach their destinations.

“We live in a hybrid environment every day. Our solutions can read their environments and react, so we know that autonomous vehicles can be safe – safer than human-controlled devices,” says Jim Rock, CEO of Seegrid, a company that integrates material handling trucks with cameras, sensors, and control equipment, allowing them to work autonomously. In addition to outfitting autonomous material handling vehicles for motor vehicle manufacturers, the company hopes to provide its technologies to those customers for use in future vehicles.

Clearly, there are big differences between manufacturing plants and public streets. Both deal with a mix of pedestrian and vehicular traffic, but the safety risks tend to be lower with 5mph tow motors than with 75mph sports cars. Still, the environments are similar enough to show how self-driving vehicles could be integrated into public traffic, Rock says.

Evolving autonomy

Fully autonomous material handling systems are new in manufacturing – an offshoot of automated guided vehicles (AGVs). AGVs follow programmed paths, typically defined by magnetic tape on the shop floor or guidance wiring built into the floor. Sensors alert AGVs when pedestrians are too close so the vehicles can stop, but they typically don’t choose their own paths.
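
For illustration only, a minimal sketch of that division of labor, with hypothetical names and thresholds: an AGV’s two possible decisions are “follow the fixed path” and “stop until the obstacle clears.”

```python
# Minimal sketch (hypothetical names and values, not any vendor's control code):
# an AGV follows a fixed, pre-programmed path and simply stops when a proximity
# sensor reports something too close. It never chooses or re-plans its route.

FIXED_PATH = ["dock_1", "aisle_3", "assembly_station", "dock_1"]  # laid out by floor tape
STOP_DISTANCE_M = 1.5  # halt if anything is closer than this

def agv_control_step(next_waypoint: str, nearest_obstacle_m: float) -> str:
    """Return the drive command for one control cycle."""
    if nearest_obstacle_m < STOP_DISTANCE_M:
        return "STOP"  # wait for the pedestrian or vehicle to clear
    return f"FOLLOW_TAPE_TO:{next_waypoint}"  # no path choice of its own
```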

Autonomous systems require more computing power and sensors. Users program destinations into the system, but the vehicles determine the paths to take. Seegrid uses vision-based systems to sense the environment and react to obstacles. Competitors use vision, light detection and ranging (LiDAR), radar, and laser sensors for navigation. Regardless of the technology, the goal is the same – a vehicle that can direct itself around an environment without hard guidelines.

Jeff Christensen, vice president of products and services at Seegrid, says that leap from automated to autonomous mirrors the debate the motor vehicles industry has been having for decades. Government agencies, communities, and universities have spent decades studying connected roads and highways – systems that could communicate with cars to direct vehicles away from congested areas or take control of vehicles, effectively having the road manage the vehicle.

“There are vehicle-to-environment, vehicle-to-infrastructure, and vehicle-to-vehicle systems. If you wanted to, you could rip up roads and redo them today, and put down the equivalent of a magnetic wire in each road,” Christensen says. “Navigation would be an easy problem to solve, but that’s not tenable.”

Vision-based navigation

Seegrid’s material handling systems, vision-guided vehicles (VGVs), use five sets of dual cameras to create 360° views of the environment around the vehicle. As the vehicle moves, cameras refresh the 3D view, and software tells the machines what items in the environment have changed as the vehicle’s position changes. Algorithms determine which of the objects the cameras have sensed are fixed or moving, and how to respond to those changes in the environment.
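
As a rough illustration of that fixed-versus-moving distinction (not Seegrid’s actual algorithm; the names and threshold below are invented), one way to classify tracked objects is to subtract the vehicle’s own motion between snapshots and check whether each object’s position really changed:

```python
# Illustrative sketch only: after compensating for the vehicle's own travel, an
# object that stays put between 3D snapshots is treated as fixed, while one whose
# position keeps changing is treated as moving.

import math

def classify_objects(prev_positions, curr_positions, ego_motion, threshold_m=0.2):
    """prev/curr_positions: {object_id: (x, y)} in the vehicle frame.
    ego_motion: (dx, dy) the vehicle itself moved between snapshots."""
    labels = {}
    for obj_id, (x_now, y_now) in curr_positions.items():
        if obj_id not in prev_positions:
            labels[obj_id] = "new"  # just entered the field of view
            continue
        x_prev, y_prev = prev_positions[obj_id]
        # Where a truly static object should appear after the vehicle's own motion.
        expected_x, expected_y = x_prev - ego_motion[0], y_prev - ego_motion[1]
        displacement = math.hypot(x_now - expected_x, y_now - expected_y)
        labels[obj_id] = "moving" if displacement > threshold_m else "fixed"
    return labels
```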

“If you think of human vision as a metaphor, you only focus on a small percentage of what your eyes see. Your depth perception is better up close than it is far away, and you can drive a car quite successfully,” Christensen explains.

He adds that Seegrid engineers determined early in the system’s development that stereo cameras generate images with depth perception, similar to how people see. Rather than programming dimensions of a workspace, Seegrid’s vision system creates its own worldview and reacts to it.

Christensen says, “We make statistical and mathematical deductions of what we see. We determine that there’s a wall over there, and the ceiling is this far, and that there are some obstructions ahead of us. It gives us an understanding of our environment in an emergent way. We’re not anticipating that there’s a specific infrastructure in place. That’s how humans drive. We see indicators like lane markers or traffic lights, but we perceive the environment in a very abstract way and make inferences based on what we see.”
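
A toy example of why paired cameras yield depth, using the standard pinhole-stereo relation (the numbers below are made up for illustration): depth is the focal length times the camera baseline divided by the pixel disparity between the two images, which is also why depth perception is sharpest up close.

```python
# Textbook stereo relation, not Seegrid's implementation: Z = f * B / d,
# where f is focal length (pixels), B is the camera baseline (m), and d is the
# disparity (pixels) between matched points in the left and right images.

def depth_from_disparity(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        return float("inf")  # no measurable disparity, effectively at infinity
    return focal_length_px * baseline_m / disparity_px

# A nearby pallet produces a large disparity (precise depth); a distant wall a
# small one (coarse depth). The made-up values below show the difference.
print(depth_from_disparity(700.0, 0.12, 42.0))  # ~2 m away
print(depth_from_disparity(700.0, 0.12, 3.0))   # ~28 m away
```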

Intuitive programming

To program the vehicles, an operator drives the route once, recording the path the vehicle should take. Route data flows from one machine to another, so programming one truck with one route effectively makes that route available to all VGVs on the system. Onboard computers compare their instructions to the 3D images of their environments created by the cameras to determine optimal routes. Christensen says the sensors create a 30m bubble around the vehicle, enough to safely navigate around obstacles.
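
A hedged sketch of that teach-and-repeat idea (the names are hypothetical, not Seegrid’s API): the operator’s demonstration drive is stored as a named list of poses, any vehicle on the system can replay it, and during replay the vehicle keeps re-localizing against the camera-built world model.

```python
# Hypothetical teach-and-repeat sketch; the route store and callbacks are invented.

shared_route_library = {}  # in practice this would live on a fleet server

def record_route(name, driven_poses):
    """driven_poses: [(x, y, heading), ...] captured while the operator drives."""
    shared_route_library[name] = list(driven_poses)

def replay_route(name, localize, drive_toward):
    """Replay a taught route on any vehicle that has the shared library."""
    for waypoint in shared_route_library[name]:
        current_pose = localize()             # compare camera view to stored world model
        drive_toward(waypoint, current_pose)  # obstacle handling happens underneath
```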

Rock adds, “A lot of what you need to control is within 30m of the vehicle – adaptive cruise control, lane adherence, parking assist. Those things need a physical understanding of the environment within 30m. GPS tells you nothing about what’s up close. You have to understand your immediate surroundings, but you also have to understand the bigger picture of the world around you, so OEMs would have to marry vision systems to other systems.”
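
The split Rock describes might look something like the following sketch, which is not any OEM’s architecture: a coarse GPS-level route supplies the heading, while the vision system has the final say inside its roughly 30m bubble.

```python
# Invented illustration of fusing a global route with near-field vision.

PERCEPTION_RANGE_M = 30.0  # roughly the bubble the article describes

def control_step(gps_route_heading_deg, vision_obstacles):
    """vision_obstacles: list of (distance_m, bearing_deg) reported by the cameras."""
    nearby = [o for o in vision_obstacles if o[0] < PERCEPTION_RANGE_M]
    if any(dist < 5.0 and abs(bearing) < 15.0 for dist, bearing in nearby):
        return "BRAKE"                       # something directly ahead and close
    return f"STEER:{gps_route_heading_deg}"  # otherwise keep following the global route
```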

Technology enablers

Vision systems have been gaining popularity in automotive applications because of low costs and increasing sophistication. Lane-departure warning systems use cameras mounted beneath the front bumper to read the road – identifying lane lines to ensure that vehicles are on the proper path. Manufacturers also use LiDAR, radar, and ultrasound sensing for advanced driver assistance systems (ADAS), and many vehicles combine sensing technologies.
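
As a generic illustration of how a camera can pick out lane lines (not any manufacturer’s implementation), the classic recipe is edge detection followed by a Hough transform over the road image. The sketch below requires OpenCV, and the image path is a placeholder.

```python
# Generic lane-line sketch using OpenCV; thresholds are typical tutorial values.

import cv2
import numpy as np

frame = cv2.imread("road_frame.jpg")  # placeholder: one frame from the lane camera
if frame is None:
    raise SystemExit("Provide a sample road image to run this sketch.")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=20)

# A real system would then check whether the detected lines are drifting toward
# the image center, which indicates the vehicle is leaving its lane.
print(0 if lines is None else len(lines), "candidate lane segments found")
```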

Christensen expects vision systems to develop more quickly because massive amounts of cash are flowing into cameras and sensors from multiple industries.

“Development of imaging sensors across all industries dwarfs what’s going into LiDAR-based hardware. Every new cell phone has to have a better camera in it every year,” Christensen says.

Add in industrial image-processing systems, gaming console graphics processing units (GPUs), and military pattern recognition and imaging software, and vision system investments “are orders of magnitude higher than all other sensor technologies combined.”

Rock adds that cheap computing power is lowering costs for most sensing technologies, but the massive mobile device consumer market favors optical systems.

“The sensors are becoming not just powerful, they’re getting cheap,” Rock says. “Cameras are dollars, and lasers are thousands of dollars. They’re coming down to hundreds of dollars, but when that happens, cameras will be down to pennies.”

Autonomous vision-guided vehicles (VGVs) must navigate work floors with pedestrian and vehicular traffic, similar to the challenge facing autonomous cars and trucks.

Integrating systems

Seegrid’s system for lift trucks and tuggers has a sensor suite on top that houses the cameras. VGVs don’t have particularly stringent aesthetic requirements, so a giant bubble on top isn’t a problem.

For passenger vehicles, Christensen says engineers will work with designers to hide camera pairs discreetly inside and outside the vehicle. Some companies are already doing that. Cadillac’s CT6 and Ford’s Interceptor police vehicle have optional systems that generate 3D views of the cars’ surroundings for theft protection.

Sensors that enable autonomous driving are safety critical, so they’ll need to be positioned carefully around the car, Christensen says. One solution would be to package cameras inside the car, as Subaru does for an ADAS system, putting stereo cameras between the rear-view mirror and windshield. But some cameras will need to be outside, creating packaging challenges.

“If software is controlling the image quality of what you’re getting back from the camera, it can review that and say, ‘Oh, you have schmutz on your lens.’ That can trigger a system that will spray a cleaning fluid on the camera to clear it,” he explains.
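
One simple way software could flag a dirty lens, offered only as an illustration of the idea rather than the production logic Christensen describes, is to watch an image-sharpness metric and trigger the washer when it drops below a tuned threshold. The OpenCV-based sketch below uses an invented threshold and actuator hook.

```python
# Illustrative dirty-lens check: variance of the Laplacian falls when the image
# is smeared or obscured. Threshold and cleaner hook are invented placeholders.

import cv2

BLUR_THRESHOLD = 60.0  # tuned per camera in practice

def lens_needs_cleaning(frame_bgr) -> bool:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() < BLUR_THRESHOLD

def camera_health_check(frame_bgr, spray_cleaner):
    if lens_needs_cleaning(frame_bgr):
        spray_cleaner()  # hypothetical actuator that sprays cleaning fluid
```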

Rock says he expects companies to adopt several sensor technologies as they study autonomy to be sure they’re offering the safest systems possible. Early test vehicles from Google and others have featured four or five sensor systems.

“We think vision is the way to go, but we don’t really know yet what the perfect combination is,” Rock says. “So it’s OK to overpay for safety. We rely on vision systems, but our industrial trucks use laser sensors from Sick, two of them, to ensure safety. Regulations in the industry require it. In the future, cameras should have the evidence that they can keep us just as safe, and we’ll be able to drop the lasers.”

Seegrid Vision

www.seegrid.com

About the author: Robert Schoenberger is the editor of TMV and can be reached at 216.393.0271 or rschoenberger@gie.net