How AI Is Taking Over Our Gadgets
Kanebridge News


AI is moving from data centres to devices, impacting everything from phones to tractors.

By Christopher Mims
Wed, Jun 30, 2021 10:25am | 5 min read

If you think of AI as something futuristic and abstract, start thinking different.

We’re now witnessing a turning point for artificial intelligence, as more of it comes down from the clouds and into our smartphones and automobiles. While it’s fair to say that AI that lives on the “edge”—where you and I are—is still far less powerful than its datacentre-based counterpart, it’s potentially far more meaningful to our everyday lives.

One key example: This fall, Apple’s Siri assistant will start processing voice on iPhones. Right now, even your request to set a timer is sent as an audio recording to the cloud, where it is processed, triggering a response that’s sent back to the phone. By processing voice on the phone, says Apple, Siri will respond more quickly. This will only work on the iPhone XS and newer models, which have a compatible built-for-AI processor Apple calls a “neural engine.” People might also feel more secure knowing that their voice recordings aren’t being sent to unseen computers in faraway places.

Google actually led the way with on-phone processing: In 2019, it introduced a Pixel phone that could transcribe speech to text and perform other tasks without any connection to the cloud. One reason Google decided to build its own phones was that the company saw potential in creating custom hardware tailor-made to run AI, says Brian Rakowski, product manager of the Pixel group at Google.

These so-called edge devices can be pretty much anything with a microchip and some memory, but they tend to be the newest and most sophisticated of smartphones, automobiles, drones, home appliances, and industrial sensors and actuators. Edge AI has the potential to deliver on some of the long-delayed promises of AI, like more responsive smart assistants, better automotive safety systems, new kinds of robots, even autonomous military machines.

The challenges of making AI work at the edge—that is, making it reliable enough to do its job and then justifying the additional complexity and expense of putting it in our devices—are monumental. Existing AI can be inflexible, easily fooled, unreliable and biased. In the cloud, it can be trained on the fly to get better—think about how Alexa improves over time. When it’s in a device, it must come pre-trained, and be updated periodically. Yet the improvements in chip technology in recent years have made it possible for real breakthroughs in how we experience AI, and the commercial demand for this sort of functionality is high.

From swords to plowshares

Shield AI, a contractor for the Department of Defense, has put a great deal of AI into quadcopter-style drones that have already carried out—and continue to be used in—real-world combat missions. One such mission is helping soldiers scan for enemy combatants in buildings that must be cleared. The DoD has been eager to use the company’s drones, says Shield AI’s co-founder, Brandon Tseng, because even when they fail, they can reduce human casualties.

“In 2016 and early 2017, we had early prototypes with something like 75% reliability, something you would never take to market, and the DoD were saying, ‘We’ll take that overseas and use that in combat right now,’” Mr. Tseng says. When he protested that the system wasn’t ready, the response from within the military was that anything was better than soldiers going through a door and being shot.

In a combat zone, you can’t count on a fast, robust, wireless cloud connection, especially now that enemies often jam wireless communication and GPS signals. When on a mission, processing and image recognition must occur on the company’s drones themselves.

Shield AI uses a small, efficient computer made by Nvidia, designed for running AI on devices, to create a quadcopter drone no bigger than a typical camera-wielding consumer model. The Nova 2 can fly long enough to enter a building, and use AI to recognize and examine dozens of hallways, stairwells and rooms, cataloging objects and people it sees along its way.

Meanwhile, in the town of Salinas, Calif., birthplace of “The Grapes of Wrath” author John Steinbeck and an agricultural center to this day, a robot the size of an SUV is spending this year’s growing season raking the earth with its 12 robotic arms. Made by FarmWise Labs Inc., the robot trundles along fields of celery as if it were any other tractor. Underneath its metal shroud, it uses computer vision and an edge AI system to decide, in less than a second, whether a plant is a food crop or a weed, and directs its plow-like claws to avoid or eradicate the plant accordingly.

FarmWise’s huge, diesel robo-weeder can generate its own electricity, enabling it to carry a veritable supercomputer’s worth of processing power—four GPUs and 16 CPUs, which together draw 500 watts of electricity.

In our everyday lives, features like voice transcription that work whether or not we have a connection, and no matter how good that connection is, could shift how we prefer to interact with our mobile devices. Getting always-available voice transcription to work on Google’s Pixel phone “required a lot of breakthroughs to run on the phone as well as it runs on a remote server,” says Mr. Rakowski.

Google has almost unlimited resources to experiment with AI in the cloud, but getting those same algorithms, for everything from voice transcription and power management to real-time translation and image processing, to work on phones required the introduction of custom microprocessors like the Pixel Neural Core, adds Mr. Rakowski.

Turning cats into pure math

What nearly all edge AI systems have in common is that, as pre-trained AI, they are only performing “inference,” says Dennis Laudick, vice president of marketing for AI and machine learning at Arm Holdings, which licenses chip designs and instructions to companies such as Apple, Samsung, Qualcomm, Nvidia and others.

Generally speaking, machine-learning AI consists of four phases:

  • Data is captured or collected: Say, for example, in the form of millions of cat pictures.
  • Humans label the data: Yes, these are cat photos.
  • AI is trained with the labelled data: This process selects for models that identify cats.
  • Then the resulting pile of code is turned into an algorithm and implemented in software: Here’s a camera app for cat lovers!

(Note: If this doesn’t exist yet, consider it your million-dollar idea of the day.)
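The four phases can be sketched in miniature. Everything below is an invented stand-in: the “features” replace real image pixels, and the nearest-centroid “training” replaces real neural-network training, but the division of labour is the same—training happens once, up front, and only the frozen inference step ships to the edge device.

```python
# A toy version of the four phases: collect, label, train, infer.
# All data and features here are made up for illustration.

# Phases 1 & 2: captured data with human labels. Each "image" is
# reduced to two invented features: (ear pointiness, whisker count).
labeled_data = [
    ((0.9, 12), "cat"),
    ((0.8, 10), "cat"),
    ((0.2, 0), "dog"),
    ((0.1, 2), "dog"),
]

# Phase 3: "training" -- here, just averaging each class's features
# into a centroid. Real training adjusts millions of parameters.
def train(data):
    sums, counts = {}, {}
    for features, label in data:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            s[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(v / counts[label] for v in s)
            for label, s in sums.items()}

# Phase 4: the trained model is frozen and shipped to the device,
# which only runs inference -- no further learning on the edge.
def infer(model, features):
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

model = train(labeled_data)          # done once, "in the cloud"
print(infer(model, (0.85, 11)))      # on-device call; prints "cat"
```

The split is the whole point for edge AI: phases one through three are expensive and happen on big servers, while the device receives only the finished model and runs the cheap `infer` step.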

The last bit of the process—something like that cat-identifying software—is the inference phase. The software on many smart surveillance cameras, for example, is performing inference, says Eric Goodness, a research vice president at technology-consulting firm Gartner. In a fast-food restaurant, such systems can already identify how many patrons are present, whether any are engaging in undesirable behaviour, or whether the fries have been in the fryer too long.

It’s all just mathematical functions, ones so complicated that it would take a monumental effort by humans to write them, but which machine-learning systems can create when trained on enough data.

Robot pratfalls

While all of this technology has enormous promise, making AI work on individual devices, whether or not they can connect to the cloud, comes with a daunting set of challenges, says Elisa Bertino, a professor of computer science at Purdue University.

Modern AI, which is primarily used to recognize patterns, can have difficulty coping with inputs outside of the data it was trained on. Operating in the real world only makes it tougher—just consider the classic example of a Tesla that brakes when it sees a stop sign on a billboard.

To make edge AI systems more competent, one edge device might gather some data but then pair with another, more powerful device, which can integrate data from a variety of sensors, says Dr. Bertino. If you’re wearing a smartwatch with a heart-rate monitor, you’re already witnessing this: The watch’s edge AI pre-processes the weak signal of your heart rate, then passes that data to your smartphone, which can further analyze that data—whether or not it’s connected to the internet.
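The watch-to-phone division of labour Dr. Bertino describes can be sketched as a tiny pipeline. The filtering and summary functions below are hypothetical simplifications of what a real wearable does; the numbers are invented:

```python
# Hypothetical edge pre-processing: the watch's cheap processor
# cleans up a noisy heart-rate signal and hands the phone only a
# compact summary, not the raw sensor stream.

def smooth(samples, window=3):
    """Edge step: moving average to suppress sensor noise."""
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

def summarize(samples):
    """Edge step: reduce the stream to a few numbers to transmit."""
    return {"min": min(samples), "max": max(samples),
            "avg": sum(samples) / len(samples)}

# Raw beats-per-minute readings, with one noisy spike at index 3.
raw = [71, 72, 70, 140, 73, 71, 72]
cleaned = smooth(raw)
summary = summarize(cleaned)  # this small dict is what the phone
                              # receives for deeper analysis
```

Sending the summary instead of the raw stream saves battery and bandwidth, and the phone (the “more powerful device”) can combine it with data from other sensors whether or not it has an internet connection.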

The overwhelming majority of AI algorithms are still trained in the cloud. They can also be retrained using more or fresher data, which lets them continually improve. Down the road, says Mr. Goodness, edge AI systems will begin to learn on their own—that is, they’ll become powerful enough to move beyond inference and actually gather data and use it to train their own algorithms.

AI that can learn all by itself, without connection to a cloud superintelligence, might eventually raise legal and ethical challenges. How can a company certify an algorithm that’s been off evolving in the real world for years after its initial release, asks Dr. Bertino. And in future wars, who will be willing to let their robots decide when to pull the trigger? Whoever does might end up with an advantage—but also all the collateral damage that happens when, inevitably, AI makes mistakes.



Lamborghini’s Urus SUV Plug-In Hybrid Will Be Available Early Next Year
By Jim Motavalli
Thu, May 2, 2024 4 min

The marketplace has spoken and, at least for now, it’s showing preference for hybrids and plug-in hybrids (PHEVs) over battery electrics. That makes Toyota’s foot dragging on EVs (and full speed ahead on hybrids) look fairly wise, though the timeline along a bumpy road still gets us to full electrification by 2035.

Italian supercar producer Lamborghini, in business since 1963, is also proceeding, incrementally, toward battery power. In an interview, Federico Foschini, Lamborghini’s chief global marketing and sales officer, talked about the new Urus SE plug-in hybrid the company showed at its lounge in New York on Monday.

The Urus SE interior gets a larger centre screen and other updates.
Lamborghini

The Urus SE SUV will sell for US$258,000 in the U.S. (the company’s biggest market) when it goes on sale internationally in the first quarter of 2025, Foschini says.

“We’re using the contribution from the electric motor and battery to not only lower emissions but also to boost performance,” he says. “Next year, all three of our models [the others are the Revuelto, a PHEV from launch, and the continuation of the Huracán] will be available as PHEVs.”

The Euro-spec Urus SE will have a stated 37 miles of electric-only range, thanks to a 192-horsepower electric motor and a 25.9-kilowatt-hour battery, but that distance will probably be less in stricter U.S. federal testing. In electric mode, the SE can reach 81 miles per hour. With the 4-litre 620-horsepower twin-turbo V8 engine engaged, the picture is quite different. With 789 horsepower and 701 pound-feet of torque on tap, the SE—as big as it is—can reach 62 mph in 3.4 seconds and attain 193 mph. That makes it marginally faster than the Urus S, but slightly slower than the range-topping Urus Performante. Lamborghini says the SE reduces emissions by 80% compared to a standard Urus.

Lamborghini’s Urus plans are a little complicated. The company’s order books are full through 2025, but after that it plans to ditch the S and Performante models and produce only the SE. That’s only for a year, however, because the all-electric Urus should arrive by 2029.

Lamborghini’s Federico Foschini with the Urus SE in New York.
Lamborghini

Thanks to the electric motor, the Urus SE offers all-wheel drive. The motor is situated inside the eight-speed automatic transmission, and it acts as a booster for the V8 but it can also drive the wheels on its own. The electric torque-vectoring system distributes power to the wheels that need it for improved cornering. The Urus SE has six driving modes, with variations that give a total of 11 performance options. There are carbon ceramic brakes front and rear.

To distinguish it, the Urus SE gets a new “floating” hood design and a new grille, headlights with matrix LED technology and a new lighting signature, and a redesigned bumper. There are more than 100 bodywork styling options, and 47 interior color combinations, with four embroidery types. The rear liftgate has also been restyled, with lights that connect the tail light clusters. The rear diffuser was redesigned to give 35% more downforce (compared to the Urus S) and keep the car on the road.

The Urus represents about 60% of U.S. Lamborghini sales, Foschini says, and in the early years 80% of buyers were new to the brand. Now it’s down to 70% because, as Foschini says, some happy Urus owners have upgraded to the Performante model. Lamborghini sold 3,000 cars last year in the U.S., where it has 44 dealers. Global sales were 10,112, the first time the marque went into five figures.

The average Urus buyer is 45 years old, though it’s 10 years younger in China and 10 years older in Japan. Only 10% are women, though that percentage is increasing.

“The customer base is widening, thanks to the broad appeal of the Urus—it’s a very usable car,” Foschini says. “The new buyers are successful in business, appreciate the technology, the performance, the unconventional design, and the fun-to-drive nature of the Urus.”

Maserati has two SUVs in its lineup, the Levante and the smaller Grecale. But Foschini says Lamborghini has no such plans. “A smaller SUV is not consistent with the positioning of our brand,” he says. “It’s not what we need in our portfolio now.”

It’s unclear exactly when Lamborghini will become an all-battery-electric brand. Foschini says that the Italian automaker is working with Volkswagen Group partner Porsche on e-fuel—synthetic, renewably made gasoline that could presumably extend the brand’s internal-combustion identity. For now, though, e-fuel is very expensive to make because it relies on wind power and captured carbon dioxide.

During Monterey Car Week in 2023, Lamborghini showed the Lanzador, a 2+2 electric concept car with high ground clearance that is headed for production. “This is the right electric vehicle for us,” Foschini says. “And the production version will look better than the concept.” The Lanzador, Lamborghini’s fourth model, should arrive in 2028.
