Observations & Opportunities: Autonomous Vehicles (AVs) Powered by AI—What Could Possibly Go Wrong?

This time, the AV discussion continues with some general and specific Artificial Intelligence (AI, also known as Machine Intelligence, MI) downsides, challenges and tips. Autonomy, safety and efficiency are obvious AI goals for AVs. AI is a tool ranging from the simple to the complex. If AI is implemented with a thorough understanding of the specific goals to be accomplished, it may work very well and exceed expectations. If not, watch the headlines.

AI is the fast-evolving foundation for many new technology efforts, now including self-driving cars, a.k.a. AVs. There are as many different approaches to AI for AVs as there are companies working on them and AVs themselves. Automakers with their own systems, or those evaluating the alternatives from Alphabet’s Waymo, Apple, or others, will cull the field at some point. Some systems will be too expensive and others may rely upon unproven technologies that will turn off auto manufacturers. Some standardization is necessary too, for regulation and insurance purposes. AV appearance will be a factor since some of the roof-mounted sensor arrays are just plain ugly and need cloaking devices; the non-aerodynamic arrays would also increase noise and decrease mileage.

Waymo’s Chrysler Pacifica Hybrid minivan AV testbed

As Waymo points out, vehicle crashes caused 1.25 million deaths worldwide in 2014, including 32,675 in the U.S. alone. U.S. traffic fatalities then rose 6% in 2016, reaching the highest point in nearly a decade. The primary cause: “There’s a clear theme to the vast majority of these incidents: human error and inattention.” Safety is a primary Waymo goal.

Apple’s AI for AV efforts are usually somewhat secret. However, they recently posted their Apple Machine Learning Journal online. It opens with “Welcome to the Apple Machine Learning Journal. Here, you can read posts written by Apple engineers about their work using machine learning technologies to help build innovative products for millions of people around the world. If you’re a machine learning researcher or student, an engineer or developer, we’d love to hear your questions and feedback.” It provides some insights on what they are doing.

How much AI is used in products or services including AVs varies enormously but it is the elephant in the room for all “smart” products whether it be your smartphone, tablet, car or thermostat. Like them or not, AI-enabled products and services can turn up anywhere and there will be some inevitable marketing hyperbole about new “AI” products. AI-enabled things may be smarter than they were but they still lack common sense and often adequate security.

A recently publicized fender-bender involving a newly deployed AV bus suggests that an AV should have enough latitude to back up, if no one is endangered, when an obvious threat is posed by another vehicle backing toward it. Of course, some human drivers would not have taken evasive action either, so blaming AI for lack of foresight or adequate programming is really blaming humans too.

AI is, and likely will be for some time, a blend of hardwired programming for following the rules of the road and machine learning (ML) that lets the vehicle avoid repeating mistakes and learn to perform better. There certainly are differing approaches to an AV’s operating system (OS) depending, in part, on whether it was conceived by an old-line auto company or a new entrant such as Apple or Waymo (Alphabet/Google).

How Extensive Is AV and AI?

One insightful recent report proclaimed, “There are basically three big questions about artificial intelligence and its impact on the economy: What can it do? Where is it headed? And how fast will it spread?

“Three new reports combine to suggest these answers: It can probably do less right now than you think. But it will eventually do more than you probably think, in more places than you probably think, and will probably evolve faster than powerful technologies have in the past.” This suggests that robust global AI efforts are not only underway but gaining momentum.

As a disclaimer, most of the companies describing AI advancements now do so relative to their own efforts. A dispassionate comprehensive overview of AI can be found online by visiting the Artificial Intelligence Index and reading their AI Index 2017 Annual Report. For those considering or upgrading AI in their products or services, the report is very revealing and allows one to avoid some common pitfalls.

AI, AV and EV Futures: Defined by the Giants

In AI, the really large technology leaders and their competitors are battling it out in the AI marketplace. It is a battle where substantial financial resources and engineering talent help carve out significant market shares. So, are these large corporations too influential?

“We are experiencing a time, where five companies are holding most of the economical (and even political) power in the world: Facebook, Alphabet, Amazon, Apple and Microsoft” surfaced in a recent blog. Alphabet (Google and Waymo), Tesla (SpaceX and the Hyperloop Pod transport-in-a-vacuum system), Uber and Apple are certainly active in AI for AVs.

Some AI efforts may require more than those companies can accomplish alone or with a few partners. If one has any doubt that AI, or AI in AVs, is real, just check out the news and recent technical papers presented by these companies and their automobile manufacturing competitors around the world. Of course, the automobile manufacturers are very active, which is no surprise. Don’t overlook the universities and R&D centers doing AI and AV research.

Many of the biggest companies involved with AI projects had modest efforts until the past few years, the exception being Waymo. AI’s impact on AVs, EVs (Electric Vehicles) and other products and services was limited but showed some very promising results. Some deep-pocket companies were busy with projects that were not yet public knowledge. However, with AVs, General Motors just announced that it is “all in” on the AV and EV businesses going forward.

G.M. produced its first round of self-driving Chevrolet Bolt cars at its Orion Assembly plant in Orion Township, Michigan. G.M.’s president, Daniel Ammann, told journalists that the cars would be ready for consumer applications in “quarters, not years.”

G.M. also plans to build a driverless ride-hailing fleet by 2019 that may eventually become part of the automaker’s core business. [G.M. the taxi company?] Executives told investors that by 2025, AV cost reductions and increased consumer purchases should combine and drive prices down to less than $1/mile, or about a third of current ride-hailing prices.

G.M.’s Chevrolet Bolt EV is the first U.S.-made, mass-market, fully electric car

“We’re aiming for a future of zero crashes, zero emissions and zero congestion,” G.M. CEO Mary Barra said.

Aside from the AV future, G.M. executives said the company sees a clear path to profitability through a wide array of electric cars, which so far have yet to take hold with consumers due to a combination of cost and range anxiety; the latter should ease as battery technology improves.

Innovations Welcome

The current breakthroughs in commercializing AI, AVs, EVs and related technologies will provide opportunities for those with innovative ideas. With some companies now offering do-it-yourself (DIY) ML (Machine Learning) apps, it may get much more interesting quickly.

The AI genie is out of the bottle. Governments, corporations, schools and individuals certainly have increasing access to AI technologies. One major question is whether or not they will make informed decisions regarding selection, implementation and usage. The general public does not have in-depth knowledge of what is being planned or secretly deployed, whether for the greater good or for nefarious purposes. However, the better AI is understood by everyone, the more likely positive outcomes will be reached by consensus.

It Is AV Rocket Science

Yes, space launch rocket systems can be AVs too. Elon Musk, of Tesla automotive fame, sees his impressive SpaceX space launch venture on a roll. Their satellite launches in 2017, to date, are impressive and are leading the way to more affordable launches among global space launch providers. It’s nice to see a private company succeed in an area long dominated by government or existing aerospace companies. Do reusable SpaceX rockets use AI? Yes. In fact, AI was the tool that allowed analysis of the key engineering challenges that make landing huge rockets possible.

No entity had seriously studied re-entry and landing the first stages of the rockets until recently. Companies and governments had just figured disposable rockets were the cost of doing business since everyone used them. Even the now defunct U.S. Space Shuttle’s original reusability goals faded over the years for numerous reasons.

A Falcon 9 first stage successfully landed on SpaceX’s autonomous spaceport drone ship in the Atlantic Ocean. This first stage was reflown in March 2017, the first-ever reflight of an orbital-class rocket stage.

Musk is certainly not alone in the private space launch business. Other tech billionaires, including Paul Allen [co-founder of Microsoft with Bill Gates, and founder of the Allen Institute for Artificial Intelligence and of Stratolaunch Systems, a space transportation venture using an air-launch-to-orbit system] and Jeff Bezos (founder of Amazon and of the Blue Origin spacecraft venture), may be enabling the new age in space launch efficiency and affordability. These new “kids” are so good that even the old Russian standby Soyuz team is scrambling to rework its business to compete.

The Soyuz spacecraft was designed for the Soviet space program by the Korolev Design Bureau (now RKK Energia) in the 1960s. The Soyuz was originally part of the unrealized Soviet manned lunar landing program. The Soyuz capsule launches on a Soyuz rocket, a frequently used and very reliable launch vehicle. The Soyuz rocket is based on the early Vostok launcher, which in turn was based on the 8K74 (R-7A Semyorka) Soviet-era ICBM.

According to one recent report, “This year [2017] has seen a number of firsts for [SpaceX]—first reflight of a Falcon 9 booster, first reuse of a Dragon cargo spacecraft, first national security payload, and a remarkable dozen landings. But probably the biggest achievement has been finally delivering on the promise of a high flight rate.”

Yes, these entrepreneurs are using AI to fine-tune their space transportation operations as well as their other commercial enterprises. Yes, AI works.

Using AI for AVs & Everything Else

Although science and engineering innovations are built upon the accomplishments of the past, fresh eyes from younger visionaries and the curious may shake up industry and business going forward. When the young ask “Why?”, or see something that looks overly complicated, fresh insights may lead to greater efficiency and productivity.

Musk is both a user and promoter of AI. “Since its founding by Elon Musk and others nearly two years ago, nonprofit research lab OpenAI has published dozens of research papers. One posted online … is very different: Its lead author is still in high school.

“The wunderkind is Kevin Frans, a senior currently working on his college applications. He trained his first neural net—the kind of system that tech giants use to recognize your voice or face—two years ago, at the age of 15. Inspired by reports of software mastering Atari games and the board game Go, he has since been reading research papers and building pieces of what they described. “I like how you can get computers to do things that previously you would think were impossible.”

Frans was working on a tricky problem that was holding back robots and other AI/ML systems, i.e., “How can machines tap what they’ve previously learned to solve new problems?”

Humans do this instinctively but machine-learning (ML) software typically repeats its lengthy training process for every new problem—even when there are common elements.

Frans’s paper, coauthored with four others affiliated with the University of California, Berkeley, reports progress on this problem. “If it could get solved it could be a really big deal for robotics but also other elements of AI,” Frans says. He developed an algorithm that helped virtual legged robots learn which limb movements could be applied to multiple tasks, such as walking and crawling. In tests, it helped virtual robots with two and four legs adapt to new tasks, including navigating mazes, more quickly. Fresh ideas help.
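The shared sub-policy idea behind this work can be illustrated with a deliberately tiny sketch (hypothetical names and toy tasks, not Frans’s actual algorithm): a small library of sub-policies is shared across tasks, and only a lightweight per-task selector is learned, rather than retraining everything from scratch.

```python
# Toy illustration of reusing learned sub-policies across tasks.
# The sub-policies are "pretrained" primitives shared by every task;
# only the per-task selection step is recomputed for a new goal.

SUBPOLICIES = {
    "step_right": lambda x: x + 1,  # move one unit right on a 1-D track
    "step_left": lambda x: x - 1,   # move one unit left
}

def rollout_score(subpolicy, start, goal, steps=10):
    """Run a sub-policy for a fixed number of steps and score it by how
    close the agent ends up to the goal (higher is better)."""
    x = start
    for _ in range(steps):
        x = subpolicy(x)
    return -abs(goal - x)

def select_subpolicy(start, goal):
    """The only task-specific 'learning' here: pick whichever shared
    sub-policy scores best on the new task."""
    return max(SUBPOLICIES,
               key=lambda name: rollout_score(SUBPOLICIES[name], start, goal))

print(select_subpolicy(0, 5))    # a rightward goal reuses step_right
print(select_subpolicy(0, -5))   # a leftward goal reuses step_left
```

Real hierarchical systems learn both the sub-policies and the selector from experience, but the division of labor is the same: common skills are kept, and only the thin layer on top adapts to each new problem.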

Next time: AI-driven AV technology will certainly reshape the global automobile manufacturing industry and its suppliers. Manufacturing innovations are likely that will reshape the supply chains and manufacturing landscape. A look at some possibilities and additional concerns from both automakers and tech entrepreneurs will provide additional AI and AV insights.

Global AV & Advanced Driver Assistance Efforts Gain Momentum

Fully autonomous vehicles (AVs, a.k.a. driverless vehicles) and ADS/ADAS (advanced driver assistance systems) are still works in progress, with different global proponents pushing competing technologies and strategies. Technical hurdles certainly remain, as do legislative agendas to control the AVs or near-AV ADS vehicles already roaming some streets worldwide in tests.

Until AVs are around in large numbers, however, we will likely see more assisted driving ADS vehicles that require the driver to pay attention and intervene as needed per the vehicle’s rules. For ADS, some carmakers are looking at implementing head-up displays (HUDs) on the windshields, much like some military fighter and commercial aircraft pilots have had for some time. This permits critical information such as speed or warnings to be displayed without drivers taking their eyes off the roads and nearby traffic. Unlike military or commercial aircraft applications, HUD costs will certainly be major considerations in vehicles. Of course HUDs could evolve much like TVs where the old grainy pictures of the past have evolved into affordable HDTV and 4K HDTVs—incredible resolution at prices that keep dropping.

The NXP S32 Auto Processing Platform

 

NXP Semiconductors N.V., one of the world’s largest suppliers of automotive semiconductors, has a control and computer system for connected, electric and autonomous cars. NXP claims its S32 platform is the world’s first fully scalable automotive computing architecture. Soon to be adopted by some premium and volume automotive brands, it offers a unified architecture of microcontrollers/microprocessors (MCUs/MPUs) and an identical software environment across application platforms.

This system addresses the challenges of future car development with an architecture that allows carmakers to provide custom in-vehicle experiences while bringing automated driving functions to market much faster than before.

“Traditional and disruptive automakers, even more than Tier ones, seek a standardized way of working across vehicle domains, segments and regions to meet increasing performance demands while contemporarily ensuring fast time to market and control over skyrocketing development costs,” said Luca DeAmbroggi, senior principal analyst, Automotive Electronics & Semiconductors at IHS Markit in London. “A common architecture and a scalable approach can cut development time for critical applications in domains like ADAS, autonomous driving or connectivity from both the HW and the SW perspective.”

Vehicle Automation—AI Enabled

AI itself is getting much smarter, as a recent MIT Technology Review news item, “AlphaGo Zero Shows Machines Can Become Superhuman Without Any Help,” explained. AlphaGo wasn’t the best Go player on the planet for very long: a new version of the masterful AI program emerged, and it’s a monster. In a head-to-head matchup, AlphaGo Zero defeated the original program by 100 games to none.

“What’s really cool is how AlphaGo Zero did it. Whereas the original AlphaGo learned by ingesting data from hundreds of thousands of games played by human experts, AlphaGo Zero, also developed by the Alphabet subsidiary DeepMind, started with nothing but a blank board and the rules of the game. It learned simply by playing millions of games against itself, using what it learned in each game to improve.

“The new program represents a step forward in the quest to build machines that are truly intelligent. That’s because machines will need to figure out solutions to difficult problems even when there isn’t a large amount of training data to learn from,” per the Technology Review description.
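The “nothing but the rules” starting point can be shown in miniature. The sketch below is emphatically not AlphaGo Zero’s neural-network self-play; it simply derives perfect play for a toy game (take 1 or 2 sticks, whoever takes the last stick wins) from the rules alone, by working backward from the terminal position, with no expert examples whatsoever.

```python
def solve_from_rules(pile):
    """Derive perfect play for 'take 1 or 2 sticks; whoever takes the last
    stick wins' from nothing but the rules, working backward from the end.
    win[n] is True when the player to move with n sticks can force a win."""
    win = [False] * (pile + 1)    # win[0]: no sticks left, no move -> a loss
    best = [None] * (pile + 1)    # best[n]: a winning move from n, if any
    for n in range(1, pile + 1):
        for take in (1, 2):
            if take <= n and not win[n - take]:
                win[n], best[n] = True, take  # leave opponent a losing position
                break
    return win, best

win, best = solve_from_rules(10)
print([n for n in range(1, 11) if not win[n]])  # positions that lose with best play
```

The program discovers on its own that the losing positions are the multiples of 3, which is the kind of knowledge a human expert would otherwise have had to supply.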

Machines that are aware and learn are not new, but this iteration is very impressive. Since AI and Machine Learning (ML) are essential for advanced AVs, this may accelerate their mastering of the challenges and produce more road-worthy systems soon.

Massive AV and AI Investments

What casual observers may fail to realize is just how many huge corporations are making multibillion-dollar investments to gain an early-adopter advantage. Every major automaker needs AVs. All automakers worldwide are looking for what will work, as are the tech firms hoping to lock in the next Windows, macOS/iOS, Linux or other competitive computer system for these vehicles.

We may well see a mix of systems as we do with computers now but all must comply with driving rules and regulations—current and planned. There are certainly considerably different driving rules in various countries so one system may not work everywhere.

Large countries, China for instance, have terrible air pollution problems caused primarily by automobiles and power plants. China is really pushing photovoltaic solar panel production and implementation for several reasons: it is a clean, non-polluting energy source, and the country’s manufacturing and installation expertise opens worldwide market opportunities.

China is making a big push towards electric vehicles (EVs) that will use photovoltaic recharging. If you are already going EV, why not use this transition as the ideal time to automate traffic flow and eliminate the problems of pollution, congestion, gridlock and accidents at the same time?

How serious is China about AI? The country has a development plan to become the world leader in AI by 2030, aiming to surpass its rivals technologically and build a domestic industry worth almost $150 billion.

Smart Connected Cars (SCCs)

NXP Semiconductors, Eindhoven, the Netherlands, and CAICT in Shenzhen, a subsidiary of China’s Ministry of Industry and Information Technology (MIIT), recently signed a strategic cooperation agreement on Smart Connected Cars (SCCs). NXP was granted Official Pilot Company Status for Intelligent Transportation and Securely Connected Vehicles in China, and the agreement will foster innovation in intelligent transportation and securely connected vehicles. CAICT was appointed by MIIT as project lead for the “Sino-German Intelligent Manufacturing Cooperation program” in 2016.

The CAICT-NXP partnership is focusing on strategic research and development, the development of standards, quality and testing, and talent exchange. It aims to advance China’s car industry with secure connectivity and infrastructure solutions, such as vehicle-to-vehicle and vehicle-to-infrastructure communications for smarter traffic.

The two parties will work on state-of-the-art networking technology and product development, while jointly promoting international standards across automotive applications such as information service terminals, vehicle-to-vehicle communications and vehicle-to-infrastructure communications and other automotive networking applications.

“Every year, 1.3 million people die in road accidents around the globe. The implementation of V2X and other intelligent transport systems will significantly reduce accidents, hours spent in traffic jams and CO2 emissions in China. However, safe and secure mobility can only come to life if there’s a commitment to collaboration. NXP is honored to be appointed as official collaboration company in the pilots, jointly working on this high-impact societal change. We look forward to supporting the transformation of the Chinese automobile industry by providing advanced secure connection and infrastructure solutions for a smarter life,” said Kurt Sievers, NXP Executive Vice President and General Manager of BU Automotive.

Next time: The AV discussion continues with some AI downsides.

Sputter Deposition of Thin Films: Introduction

Sputtering, in its simplest form, is the ejection of atoms from a solid or liquid target bombarded by energetic particles, mostly ions. It results from collisions between the incident energetic particles, and/or the recoil atoms they create, and surface atoms. One of the major advantages of this process is that sputter-ejected atoms have kinetic energies significantly larger than those of evaporated materials, and the growing film is subjected to a number of energetic species from the plasma. Figure 1 shows a comparison of the energetics of thermal evaporation and sputtering for Cu [1].

Figure 2 shows the general sputtering process. Atoms or molecules of a solid material (the target, or sputtering source) are ejected into the gas phase or plasma by bombardment with energetic gas ions and deposited on a substrate above or to the side of the target. A vacuum is required to initiate the plasma whose ions bombard the target. Sputtering is essentially a momentum exchange between the incident ions and the target atoms, shown in Figure 3 [2]; the more intense and concentrated the plasma in the region of the target, the higher the atom removal rate (and thus the deposition rate).

The measure of the removal rate of surface atoms is the sputter yield Y, defined as the number of sputter-ejected atoms per incident projectile; it depends on the energy of the incident ion. The mass of the ion relative to the mass of the target atoms also matters: momentum transfer is more efficient when the masses are similar. Inert gases are used to generate the plasma. The most common (and cheapest) sputtering gas is argon (Ar), followed by krypton (Kr), xenon (Xe), neon (Ne), and nitrogen (N2). There will be negligible sputtering with light gases such as hydrogen and helium. Reactive gases typically used are oxygen (O2), nitrogen, fluorine (F), hydrogen (H2), and hydrocarbons (methane, butane, etc.).
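The mass-matching point lends itself to a quick calculation. In a head-on elastic binary collision, the maximum fraction of the ion’s kinetic energy that can be transferred to a target atom is 4·m1·m2/(m1 + m2)². This is only a rough first indicator of momentum-transfer efficiency (real yields also depend on surface binding energy, ion energy and angle of incidence), but it shows the trend:

```python
def max_energy_transfer(m_ion, m_target):
    """Maximum fraction of ion kinetic energy transferred to a target atom
    in a head-on elastic binary collision: 4*m1*m2 / (m1 + m2)**2."""
    return 4.0 * m_ion * m_target / (m_ion + m_target) ** 2

# Common sputtering gases (atomic masses in amu) against a Cu target (63.5 amu)
cu = 63.5
for gas, m in [("He", 4.0), ("Ne", 20.2), ("Ar", 39.9), ("Kr", 83.8), ("Xe", 131.3)]:
    print(f"{gas}: {max_energy_transfer(m, cu):.2f}")
```

Argon and krypton, with masses close to copper’s, can transfer roughly 95–98% of the available energy, while helium transfers only about 22%, consistent with the negligible sputtering observed for light gases.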

Figure 1. Comparison of thermal energy distributions for Cu evaporated at 1300 K and energy distribution of sputtered Cu [1]

The major sputtering techniques are diode, planar magnetron, cylindrical magnetron, high power impulse magnetron, and ion beam sputtering. All these methods have a number of variations. Diode sputtering, shown in Figure 2, is the simplest configuration of this family. Either RF (for poorly conductive targets) or DC (for conductive targets) power is applied to the sputtering target. This type of sputtering is typically performed at higher chamber pressures than its cousins, 0.5 to 10 Pa (about 4 × 10^-3 to 7.5 × 10^-2 torr). Substrate heating is often required to obtain high quality adherent films. RF sputtering has the advantages of higher deposition rates and a wider range of materials that can be deposited. Both methods, however, can be used for reactive deposition. As with all sputtering processes, deposition rate depends on a number of factors, such as chamber pressure, power to the target and substrate-to-target spacing.

Figure 2. Diagram of the sputtering process.

 

Figure 3. Diagram showing momentum exchange in the target during the sputtering process [2].

As the name implies, planar magnetron sputtering in its simplest form utilizes a flat sputtering target in a cathode enclosure. Magnetron targets can be as small as 1 inch or as large as several meters. Figure 4 shows the geometry of a basic planar magnetron cathode. Magnets (called magnetics) are placed under the target in various configurations to confine the plasma or spread it above the region of the target. The magnetic lines of force focus the charged gas atoms (ions); the stronger the magnetic field, the more confined the plasma will be (consequences are discussed below). The magnetics focus the plasma at the surface of the target, and an erosion pattern, commonly called a “racetrack”, forms as sputtering proceeds. Magnet configuration differs for planar circular, planar rectangular, external and internal cylindrical magnetrons. Figure 5 shows an erosion pattern of a conventional planar magnetron cathode. This pattern is narrow and material usage is poor, but second and third generation magnetrons use target material much more efficiently. Target utilization of cylindrical magnetrons can be as high as 80%. Other designs, such as rotating circular magnetrons and full-face erosion magnetrons, also have high target utilization.

Figure 4. Magnet geometry in a planar magnetron cathode.

 

Figure 5. Erosion racetrack in a magnetron sputtering target.

 

The advantages of magnetron sputtering are:

  • Wide range of materials can be deposited
  • DC and RF reactive processes possible
  • Lower chamber pressures
  • Higher deposition rates than diode sputtering processes
  • Dense, high quality thin films
  • Large areas can be covered
  • Good thickness uniformity
  • Substrate heating not required in many cases
  • Amenable to a wide range of substrates, including plastics
  • Deposition conditions easily controlled
  • Several cathode configurations possible

 

Disadvantages are:

  • Poor materials usage (this is improving with modern cathode designs)
  • Lower deposition rates than electron beam evaporation, ion beam sputtering, cathodic arc and CVD processes

 

The next blog will address other magnetron designs and applications.

 

References:

  1. Handbook of Deposition Technologies for Films and Coatings, 3rd Ed., P. M. Martin, Ed., Elsevier (2009).
  2. Handbook of Deposition Technologies for Films and Coatings, 2nd Ed., R. F. Bunshah, Ed., Noyes (1994).

Observations & Opportunities: Self-Driving Vehicles—Challenges Abound

Remember when you used the rearview mirror to back into a parking space? Some of us still do, but many vehicles now include, or offer as an option, automated parking aids. In practice, parking assist conveniently works with another increasingly common system, AI-based automated braking, that avoids backing over unsuspecting pedestrians or hitting inanimate objects.

This is just one example of what basic driver-assistance systems can do now. Yes, some consumers are still skeptical of fully autonomous vehicles (AVs) and ADS/ADAS (advanced driver assistance systems). They question their practicality, but also like the idea of being able to do other things to pass the time while stalled in traffic congestion. The many heavyweight companies developing AVs are counting on the latter. The expected decrease in traffic accidents and minimized congestion are benefits that insurance companies and governments applaud.

Dashboard infotainment systems, mobile devices and laptops brought onboard for consuming media and information, and seemingly ubiquitous smartphones already distract drivers dangerously. AVs that could safely transport distracted occupants should therefore become more popular, and better for all concerned.

Perhaps the practical limits of AV system implementation are “driver” and passenger sensory overload plus the thought of simply not being in control. Significant AV system costs and technology maturity are also concerns. Even when AV systems become commonplace, the cost of the components alone will be substantial. As Tesla Inc. indicated recently, the cost differential between assisted driving and full AV capability is several thousand dollars. All of these various systems depend, to varying degrees, upon affordable and reliable artificial intelligence (AI) implementation with associated sensors and actuators. [Also see earlier blogs on AI & Virtual Reality—Let The Games (and Work) Begin and AI & Autonomous Vehicles—Awesome Implications]

Some AV Reality Checks

At this point in time, it would be difficult to find an automobile manufacturer that is not actively working on integrating sophisticated autonomous operation into some, or all, of its vehicles. At the same time, these manufacturers are increasingly focusing on electric vehicles (EVs) or hybrid gasoline-electric vehicles. Within a decade, internal-combustion engine vehicles may well be disappearing.

Since many “technology” companies regard the automobile industry as lacking in computer, microelectronics and software expertise, the established computer hardware and software companies are either developing AV systems independently or in joint ventures with familiar automotive companies. Alphabet’s Google, Apple, Intel, Microsoft, and others are very active with massive ongoing investments. Recently, NVIDIA Corp. has been releasing AI chips specifically for AV applications with impressive early evaluations. Realistically, not all will be successfully adopted but differing AV systems will eventually enter mass production.

General Motors Co. recently acquired the LIDAR technology company Strobe, Inc. As part of the deal, Strobe’s engineering talent joins GM’s Cruise Automation team to define and develop next-generation LIDAR solutions for self-driving vehicles. GM thinks its AV team is complete now.

Lidar (also written LIDAR, LiDAR, and LADAR) originated as a surveying technology that measures distance by illuminating a target with pulsed laser light and measuring the reflected pulses with a sensor. Differences in laser return times and wavelengths are then used to make digital 3D representations of the target.
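The core time-of-flight arithmetic is simple. A bare-bones sketch (ignoring real-world corrections such as atmospheric effects and detector timing jitter):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_range(round_trip_seconds):
    """Pulsed time-of-flight ranging: the laser pulse travels to the target
    and back, so the one-way distance is half the round trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A 1-microsecond round trip corresponds to a target roughly 150 m away
print(lidar_range(1e-6))
```

Sweeping such pulses across a scene millions of times per second, and timing each return, is what builds the high-resolution 3D point clouds AV systems rely on.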

LIDAR is said to create higher-resolution images that provide more accurate views of the world than cameras or radar alone. As self-driving technology continues to evolve, LIDAR’s accuracy may play a critical role in its deployment.

“The successful deployment of self-driving vehicles will be highly dependent on the availability of LIDAR sensors,” said Julie Schoenfeld, Founder and CEO, Strobe, Inc.

GM is planning to test its vehicles in “fully autonomous mode” in New York state in early 2018, according to New York Governor Andrew Cuomo. However, the planned testing by GM and its self-driving unit, Cruise Automation, will initially be a Level 4 autonomous vehicle.

A level 3 car still needs a steering wheel and a driver who can take over if the car encounters a problem, while a level 4 vehicle can drive itself without human input within a limited operational domain, such as geofenced areas or dedicated lanes. A level 5 vehicle is capable of navigating roads without any driver input and in its purest form would have no steering wheel or brake pedal.

General Motors’ Cruise Automation LIDAR on a Chevrolet Bolt EV (electric vehicle)

 

Caveats

The many AV systems use very sophisticated sensors to evaluate the immediate surroundings, sensors that vehicle owners, or automobile mechanics, may not fully understand. Looking ahead, the mainstream vacuum-dependent deposition and etching/cleaning processes used to manufacture these devices will likely need refinement and evolution. Unlike a PC or smartphone problem, where failure may be merely annoying, if a driverless vehicle goes awry because of a microprocessor or sensor manufacturing defect, the consequences can be tragic and very expensive.

Those using the ICs, MEMS and other micro devices for AI and AV have at least two options to minimize liability: build in considerable redundancy or use fail-safe components and systems. With IC manufacturing, although each device on a wafer may be very similar, they are not absolutely identical. That means the patterning lithography must be nearly perfect and each layer of material deposited must be incredibly uniform. What was acceptable yesterday may not be good enough going forward.
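The redundancy option can be made concrete with the classic triple-modular-redundancy (TMR) pattern: run three independent units and majority-vote their outputs, so a single faulty unit is outvoted. A minimal sketch with made-up reading values:

```python
from collections import Counter

def majority_vote(readings):
    """Return the value reported by a strict majority of redundant units,
    or None when no majority exists (a fail-safe 'units disagree' signal)."""
    value, count = Counter(readings).most_common(1)[0]
    return value if count > len(readings) / 2 else None

# One faulty sensor out of three is simply outvoted
print(majority_vote([42, 42, 17]))  # two units agree
print(majority_vote([1, 2, 3]))     # total disagreement -> fail safe
```

Real safety-critical designs add much more (diverse hardware, watchdogs, graceful degradation), but voting over independent channels is the basic building block.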

AI, in the most basic terms, suggests that its (dedicated) computer is cognizant of the specified environment (using sensors) and performs tasks within its defined decision-making capabilities (using actuators). In a vehicle, AV control is tied into an already extremely complex electrical/electronic system with its own central computer and with dedicated embedded computers within specific components—a complex network.

“Artificial intelligence (AI, also machine intelligence, MI) is apparently intelligent behavior by machines, rather than the natural intelligence (NI) of humans and other animals. In computer science, AI research is defined as the study of ‘intelligent agents’: any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term ‘artificial intelligence’ is applied when a machine mimics ‘cognitive’ functions that humans associate with other human minds, such as ‘learning’ and ‘problem solving,’” per Wikipedia.

“Intelligent agents are often described schematically as an abstract functional system similar to a computer program. For this reason, intelligent agents are sometimes called abstract intelligent agents (AIA) to distinguish them from their real world implementations as computer systems, biological systems, or organizations.”
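The sense-decide-act loop just described, with sensors feeding condition-action rules that drive actuators, is exactly the simple reflex agent shown in the figure below. A toy sketch in Python; the percept fields and action names are hypothetical, not from any production AV stack:

```python
# A minimal simple reflex agent: it maps the current percept directly
# to an action via condition-action rules, with no internal world model.
def reflex_vehicle_agent(percept: dict) -> str:
    """percept: a hypothetical sensor reading; returns an actuator command."""
    if percept.get("obstacle_distance_m", float("inf")) < 10:
        return "brake"                      # rule 1: obstacle close -> brake
    if percept.get("speed_mps", 0) < percept.get("speed_limit_mps", 0):
        return "accelerate"                 # rule 2: below limit -> speed up
    return "hold_speed"                     # default: maintain speed

print(reflex_vehicle_agent({"obstacle_distance_m": 5}))  # brake
```

Real AV controllers are vastly more capable (they maintain models, plan, and learn), but the agent abstraction above is the common skeleton.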

Simple reflex agent, Utkarshraj Atmaram, Wikipedia

 

A Broader Perspective of AI and AV

Microelectronics technology is invading many products, and those products are usually interconnected via the Internet. Consider a self-driving car, for example: will the passengers be viewing live TV?

Recently, Gartner, Inc., the Stamford, CT technology research company, highlighted the top strategic technology trends that will impact most organizations in 2018. Creating systems that learn, adapt and potentially act autonomously will be a major battleground for technology vendors through at least 2020. The ability to use AI to enhance decision making, reinvent business models and ecosystems, and remake the customer experience will drive the payoff for digital initiatives through 2025.

“Currently, the use of autonomous vehicles in controlled settings (for example, in farming and mining) is a rapidly growing area of intelligent things. We are likely to see examples of autonomous vehicles on limited, well-defined and controlled roadways by 2022, but general use of autonomous cars will likely require a person in the driver’s seat in case the technology should unexpectedly fail,” said David Cearley, vice president and Gartner Fellow in his Top 10 Strategic Technology Trends for 2018 presentation. “For at least the next five years, we expect that semiautonomous scenarios requiring a driver will dominate. During this time, manufacturers will test the technology more rigorously, and the nontechnology issues such as regulations, legal issues and cultural acceptance will be addressed.”

“AI techniques are evolving rapidly and organizations will need to invest significantly in skills, processes and tools to successfully exploit these techniques and build AI-enhanced systems,” added Mr. Cearley. “Investment areas can include data preparation, integration, algorithm and training methodology selection, and model creation. Multiple constituencies including data scientists, developers and business process owners will need to work together.”

The first three strategic technology trends explore how artificial intelligence (AI) and machine learning are seeping into virtually everything and represent a major battleground for technology providers over the next five years. The next four trends focus on blending the digital and physical worlds to create an immersive, digitally enhanced environment. The last three refer to exploiting connections between an expanding set of people and businesses, as well as devices, content and services to deliver digital business outcomes.

Cearley adds, “Intelligent things are physical things that go beyond the execution of rigid programming models to exploit AI to deliver advanced behaviors and interact more naturally with their surroundings and with people. AI is driving advances for new intelligent things (such as autonomous vehicles, robots and drones) and delivering enhanced capability to many existing things (such as Internet of Things [IoT] connected consumer and industrial systems).”

Next time: Examining widespread ADS/ADAS (advanced driver assistance systems) efforts worldwide continues.

Observations & Opportunities: AI & Autonomous Vehicles—Awesome Implications

Simply put, the business and technology implications for autonomous vehicles, or Automated Driving Systems (ADSs) per the U.S. Department of Transportation (DoT), for those making or using vacuum equipment and related materials, are enormous. These vehicles will require powerful and reliable ICs plus sophisticated MEMS sensors and actuators—all made with vacuum-centric technology to varying degrees.

Will ADSs be widely accepted and their use be on a massive scale? No one seems to know when it will happen on a large scale but some huge global companies are betting that it will be soon. The driving public may be skeptical of Autonomous Vehicles (AVs) at the moment but that was true with personal computers and smartphones too. Before the iPhone just a little over a decade ago, how many of us would have thought that today’s reliance on smartphones, laptops and other mobile devices would be so pervasive, even for TV experiences, on the smaller screens?

Many huge manufacturing and high-tech firms that are not part of the automobile industry itself, including those using contract manufacturers to build their products, are involved. Insurance companies would really like ADSs, as would cities where gridlocked traffic and accidents are constant problems, as are driver distractions. Adding some control to vehicles could help reduce accidents and traffic jams.

With President Trump proclaiming that massive infrastructure investment is needed, as have several previous presidents, it might actually happen this time. If roads are going to be rebuilt for the 21st Century, then we need smarter roads, bridges, etc.: computational devices, sensors, actuators, precise positioning information, and other technologies to make ADS vehicles safe and reliable. ADSs absolutely rely upon Artificial Intelligence (AI), and that is computationally intensive. The value of the electrical/electronics content in new vehicles now exceeds the mechanical component costs.

Autonomous Vehicles Defined
So what exactly is an autonomous vehicle or ADS?

“An autonomous car (also known as a driverless car, auto, self-driving car, robotic car) and Unmanned Ground Vehicle is a vehicle that is capable of sensing its environment and navigating without human input. Many such systems are evolving, but as of 2017 no cars permitted on public roads were fully autonomous. They all require a human at the wheel who must be ready to take control at any time,” per Wikipedia [https://en.wikipedia.org/wiki/Autonomous_car]. Intel Corp. uses automotive Advanced Driver Assistance Systems (ADAS) terminology.

Caption: Courtesy of Intel Corporation

 

Why ADS/ADAS?
With so many distracted drivers making your commute or everyday driving trips hazardous, perhaps ADSs do offer some compelling advantages worth considering.

The U.S. DoT recently issued their “Automated Driving Systems (ADSs)” report. In it, Secretary Elaine L. Chao says, “Today, our country is on the verge of one of the most exciting and important innovations in transportation history— the development of Automated Driving Systems (ADSs), commonly referred to as automated or self-driving vehicles.

“The future of this new technology is so full of promise. It’s a future where vehicles increasingly help drivers avoid crashes. It’s a future where the time spent commuting is dramatically reduced, and where millions more—including the elderly and people with disabilities–gain access to the freedom of the open road. And, especially important, it’s a future where highway fatalities and injuries are significantly reduced.

“Since the Department of Transportation was established in 1966, there have been more than 2.2 million motor-vehicle-related fatalities in the United States. In addition, after decades of decline, motor vehicle fatalities spiked by more than 7.2 percent in 2015, the largest single-year increase since 1966. The major factor in 94 percent of all fatal crashes is human error. So ADSs have the potential to significantly reduce highway fatalities by addressing the root cause of these tragic crashes.”

A report from AAA reveals that the majority of U.S. drivers seek autonomous technologies in their next vehicle, but they continue to fear the fully self-driving car, so far. Despite the prospect that autonomous vehicles will be safer, more efficient and more convenient than their human-driven counterparts, three-quarters of U.S. drivers report feeling afraid to ride in a self-driving car, and only 10 percent report that they’d actually feel safer sharing the roads with driverless vehicles. As automakers press forward in the development of autonomous vehicles, AAA urges the gradual, safe introduction of these technologies to ensure that American drivers are informed, prepared and comfortable with this shift in mobility. [http://newsroom.aaa.com/2017/03/americans-feel-unsafe-sharing-road-fully-self-driving-cars/]

From a practical perspective, it will take a while to improve infrastructure that can accommodate ADSs and for the inevitable tweaking of the many ADS systems.

Tech Giants Pushing ADS Technology

For those exploring ADS technology’s opportunities, consider SAE International’s efforts to date. It is a global association of more than 128,000 engineers and related technical experts in the aerospace, automotive and commercial-vehicle industries [https://www.sae.org/misc/pdfs/automated_driving.pdf]. It’s worth reading their description of the levels of driving automation to better understand the challenges involved. Although it certainly addresses vehicles, it also has aerospace implications. Military drones today, and some missiles, use AI to perform their assigned tasks. Decades ago, fully automated aircraft with automated air traffic control (ATC) were demonstrated.

A few weeks ago, Brian Krzanich, the chief executive officer of Intel Corp., announced that Waymo and Intel were collaborating on self-driving car technology.

Autonomous Driving will End Human Driving Errors and Lead to Safer Roads for Everyone.
Per Mr. Krzanich, “One of the big promises of artificial intelligence (AI) is our driverless future. Nearly 1.3 million people die in road crashes worldwide every year – an average 3,287 deaths a day. Nearly 90 percent of those collisions are caused by human error. Self-driving technology can help prevent these errors by giving autonomous vehicles the capacity to learn from the collective experience of millions of cars – avoiding the mistakes of others and creating a safer driving environment.

“Given the pace at which autonomous driving is coming to life, I fully expect my children’s children will never have to drive a car. That’s an astounding thought: Something almost 90 percent of Americans do every day will end within a generation. With so much life-saving potential, it’s a rapid transformation that Intel is excited to be at the forefront of along with other industry leaders like Waymo.

“Waymo’s newest vehicles, the self-driving Chrysler Pacifica hybrid minivans, feature Intel-based technologies for sensor processing, general compute and connectivity, enabling real-time decisions for full autonomy in city conditions. As Waymo’s self-driving technology becomes smarter and more capable, its high-performance hardware and software will require even more powerful and efficient compute. By working closely with Waymo, Intel can offer Waymo’s fleet of vehicles the advanced processing power required for level 4 and 5 autonomy.

“With 3 million miles of real-world driving, Waymo cars with Intel technology inside have already processed more self-driving car miles than any other autonomous fleet on U.S. roads. Intel’s collaboration with Waymo ensures Intel will continue its leading role in helping realize the promise of autonomous driving and a safer, collision-free future,” added Krzanich.

Consumer Adoption of ADS Driving & On-Demand Car Services
The Gartner Consumer Trends in Automotive online survey, conducted from April 2017 through May 2017 among 1,519 people in the U.S. and Germany, found that 55 percent of respondents will not consider riding in a fully autonomous vehicle, while 71 percent may consider riding in a partially autonomous vehicle. Gartner, Inc. is a global research and advisory company [http://www.gartner.com/technology/home.jsp].

The Gartner survey says that “…concerns around technology failures and security are key reasons why many consumers are cautious about fully autonomous vehicles. Fear of autonomous vehicles getting confused by unexpected situations, safety concerns around equipment and system failures, and vehicle and system security are top concerns around using fully autonomous vehicles,” explains Mike Ramsey, research director at Gartner. Survey respondents agreed that fully autonomous vehicles do offer many advantages, including improved fuel economy and a reduced number and severity of crashes. Additional benefits they identified included having a safe transportation option when drivers are tired and using travel time for entertainment and work.

The survey also found that consumers who currently embrace on-demand car services are more likely to ride in and purchase partially and fully autonomous vehicles. “This signifies that these more evolved users of transportation methods are more open toward the concept of autonomous cars,” said Mr. Ramsey.

Next time: More on widespread ADS/ADAS (advanced driver assistance systems) efforts worldwide.

Ion Assisted Deposition of Thin Films

Deposition of thin films by thermal and electron beam (e-beam) evaporation processes was discussed in the previous two Blogs. One of the major problems with these processes is the low energy of the evaporated atoms, which can cause problems in the thin films such as poor adhesion to the substrate, reduced density, porous and columnar microstructure, increased water pick up and poor mechanical properties. Typically, the substrate is heated to several hundred degrees Celsius during coating to mitigate this effect, but it is by no means eliminated. Ion assisted deposition (IAD) and ion plating mitigate many of these problems by providing enhanced energy to the evaporated atoms and ion cleaning the substrate [1].

For optical coatings, the problem with porous films is that they can subsequently absorb moisture, which changes the refractive index of the layer(s) and can cause shifts in the center wavelength with changes in ambient temperature and humidity. Low density also limits mechanical durability to some extent, although these films can typically meet most of the MIL-SPEC durability and environmental requirements. Furthermore, the requirement to heat the components during processing can limit substrate material choice and also introduce stress in the substrate due to thermal cycling.

Figure 1 below shows the placement of an ion source in a typical deposition chamber. Bombardment prior to deposition is used to sputter clean the substrate surface. Bombardment during the initial deposition phase can modify the nucleation behavior of the depositing material such as nucleation density. During deposition the bombardment is used to modify and control the morphology and properties of the depositing film such as film stress and density. It is important, for best results, that the bombardment be continuous between the cleaning and the deposition portions of the process in order to maintain an atomically clean interface [1]. IAD adds a high energy ion beam that is directed at the part to be coated.  These ions impart energy to the deposited material, acting almost like an atomic sized hammer, resulting in a higher film density than achieved with purely evaporative methods.

The higher density of IAD coatings generally gives them more mechanical durability, greater environmental stability and lower scatter than conventional evaporated films.  Furthermore, the amount of energetic bombardment can be varied from zero to a maximum level on a layer by layer basis, giving the process tremendous flexibility.  For example, while IAD is not compatible with some of the commonly used materials in the infrared, it can be used solely on the outermost layer to yield an overall coating with superior durability.  The ion energy can also be used to modify the intrinsic stress of a film during deposition.  In some cases, this can change the film stress from tensile to compressive, which can help to maintain substrate surface figure, especially when depositing thick infrared coatings.

Several types of ion source are available, including Kaufman, end Hall, cold cathode and hollow cathode types. The sources differ in how ions are created and accelerated to the substrate and in their ion energy distributions. All these sources ionize inert gases such as argon and krypton to bombard the substrate. Ions are extracted by several methods: grid extraction (Kaufman type), where ions are relatively monoenergetic, or a broad beam ion source (end Hall and hollow cathode) having a spectrum of ion energies. Figures 2 and 3 show cross sections of Kaufman and end Hall ion sources [1,2,3]. The Kaufman source is gridded to control the energy of exiting ions. The ion current available from the ion source is determined by source parameters such as gas pressure, cathode power, anode potential, geometry, etc. The accelerator grid serves two purposes: 1) to extract the ions from the discharge chamber, and 2) to determine the ions’ trajectories, i.e., focusing. Table 1 shows operating parameters for this ion source.

Table 1. Maximum argon ion beam current

Discharge voltages for end Hall ion sources typically range from 40 – 300V. Discharge currents for hollow cathode designs range from 30 μA – 5A with discharge voltages up to 16 kV [4]. This unit is also used for ion etching and ion sputtering.

A wide range of properties of evaporated (and other PVD, as well) coatings improve with the increased density and packing density resulting from ion bombardment. The effects of IAD are nowhere more apparent than in moisture stability and stress in optical coatings [2]. Moisture can readily penetrate low density films deposited using low energy processes, causing a distinct shift in optical properties (refractive index, absorption), as shown in Figure 4. We see that the moisture shift in the refractive index is significant for as-deposited HfO2 films while negligible for IAD films.
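The size of such a moisture shift can be estimated with a simple linear effective-medium mixing model: the film index is roughly the packing-density-weighted average of the bulk index and the pore index, and the pore index jumps from 1.0 (air) to 1.33 when water fills the voids. A sketch with assumed numbers; the packing densities and the bulk HfO2 index of ~2.1 are illustrative assumptions, not values from reference [2]:

```python
# Linear-mixing estimate of a porous film's refractive index:
#   n_film ≈ p * n_bulk + (1 - p) * n_pore,  p = packing density.
def film_index(n_bulk: float, packing: float, n_pore: float) -> float:
    return packing * n_bulk + (1.0 - packing) * n_pore

N_HFO2 = 2.1  # assumed bulk index of HfO2, roughly mid-visible

for packing in (0.85, 0.99):  # porous evaporated film vs. near-dense IAD film
    dry = film_index(N_HFO2, packing, 1.00)   # pores filled with air
    wet = film_index(N_HFO2, packing, 1.33)   # pores filled with water
    print(f"p={packing}: dry n={dry:.3f}, wet n={wet:.3f}, shift={wet - dry:.3f}")
```

In this crude model the index shift is just (1 - p) times 0.33, so a film at 85% packing shifts by about 0.05 while a near-dense IAD film shifts by only a few thousandths, which is the qualitative behavior Figure 4 shows.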

Stress in evaporated thin films deposited without IAD is generally tensile. Compressive stress is desirable in thin films to enhance mechanical properties and reduce cracking, although all stress should be kept to a minimum. By removing loosely bound atoms, IAD increases film density and can change stress from tensile to compressive. The capability to tune stress is particularly valuable in multilayer films: it is possible to alternate layers between tensile and compressive stress, achieving very low net stress in the resulting structure.
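The stress-balancing idea can be sketched as a thickness-weighted average over the stack; the layer thicknesses and stress values below are hypothetical, chosen only to show how alternating tensile and compressive layers can nearly cancel:

```python
# Net stress of a multilayer, approximated as the thickness-weighted
# average of the individual layer stresses (tensile > 0, compressive < 0).
def net_stress_mpa(layers):
    """layers: list of (thickness_nm, stress_mpa) tuples."""
    total_thickness = sum(t for t, _ in layers)
    return sum(t * s for t, s in layers) / total_thickness

# Hypothetical 20-layer stack: compressive IAD layers alternating
# with tensile layers, tuned so the stack is nearly stress-free.
stack = [(120, -200), (180, +130)] * 10
print(f"net stress: {net_stress_mpa(stack):.1f} MPa")  # -2.0 MPa
```

Even though individual layers here carry 100-200 MPa of stress, the stack as a whole nets out to about -2 MPa in this toy example, which is the design goal the paragraph above describes for thick infrared coatings.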

However, with IAD there can be too much of a good thing, and defects can form during deposition. In this process, ion energy can be given up to the growing layer either at the surface or in the underlying regions [5]. Atom displacements responsible for lattice damage (voids, lattice modification) are produced by energy deposition in the bulk regions of the film. Contrast this with the surface smoothing described previously. Additionally, inert gases can be driven into the film; an extension of this effect is ion implantation, used extensively in semiconductor technology. Fortunately, the threshold energy needed for bulk defect formation is higher than the threshold for surface-driven processes. Care must thus be taken not to use excessive ion energy, or bulk defects will be created.

Ion sources are also used for ion beam sputter deposition of thin films, but that will be addressed in a future Blog.

Figure 1. Ion source placement in deposition chamber.

Figure 2. Kaufman ion source geometry [3]
Figure 3. End Hall ion source geometry [1]
Figure 4. Shift in refractive index for evaporated HfO2 films with and without IAD [2]
References:

  1. Donald M. Mattox, in Handbook of Deposition Technologies for Films and Coatings, 3rd Ed., P. M. Martin, Ed., Elsevier (2009).
  2. D. E. Morton, V. Fridman, 41st Annual Technical Conference Proceedings of the SVC, 297 (1998).
  3. South Bay Technology, Applications Laboratory Report 123.
  4. J. Alessi et al., Brookhaven Laboratory Report 102494-2013 CP.
  5. A. R. González-Elipe, F. Yubero and J. M. Sanz, Low Energy Ion Assisted Film Growth, Imperial College Press (2003).

 

AI & Virtual Reality—Let The Games (and Work) Begin!

Artificial Intelligence (AI) and Virtual Reality (VR) have enormous potential for entrepreneurs, designers, software experts, video gaming developers and high-tech manufacturers. Some common AI and VR essentials include very powerful computing in extremely small spaces and exceptionally high-definition displays, real or projected. Many sensors and actuators will be required too. If today’s marketplace prices are any indication, consumers, businesses and militaries will pay handsomely for the very best. But that presents several challenges, including terminology.

AI terminology seems to be described somewhat uniformly, but what a given company describes as AI can vary enormously. VR has several common descriptive variants, since enhanced reality is more affordable to manufacture and more affordable for consumers. Microsoft recently announced Windows Mixed Reality headsets as part of a new thrust: “We are on a mission to help empower every person and organization on the planet to achieve more, and one of the ways we are doing that is through the power of mixed reality,” said Alex Kipman, Technical Fellow at Microsoft. This is a follow-on to their earlier HoloLens headsets. The technology behind it will allow the flagship operating system to use the latest generation of Windows 10 hardware and software for augmented and virtual reality experiences.

Caption Windows 10 Mixed Reality headsets from partners Lenovo, Acer, Dell & HP

 

Alphabet’s X division (Google’s parent company) has moved on to refining their technology, with the Google Glass 2.0 headsets the result. Google is targeting business and manufacturing applications that will help boost productivity. The original Google Glass headsets had some appeal but also some problems, which were addressed for the version 2.0 headsets.

Caption: Google Glass 2.0 headset

 

Augmented Reality

“AR is a live direct or indirect view of a physical, real-world environment whose elements are “augmented” by computer-generated or extracted real-world sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called computer-mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. Augmented reality enhances one’s current perception of reality, whereas in contrast, virtual reality replaces the real world with a simulated one.[1][2] Augmentation techniques are typically performed in real time and in semantic context with environmental elements, such as overlaying supplemental information like scores over a live video feed of a sporting event,” according to Wikipedia. [https://en.wikipedia.org/wiki/Augmented_reality]

Apple is excited about Augmented Reality and will soon introduce such capabilities on iPhones and iPads. There are rumors that they may introduce glasses too. What is real there is Apple’s ARKit, a set of software developer tools for creating augmented reality apps for iOS.

According to Apple developers, “Apps can use Apple’s augmented reality (AR) technology, ARKit, to deliver immersive, engaging experiences that seamlessly blend realistic virtual objects with the real world. In AR apps, the device’s camera is used to present a live, onscreen view of the physical world. Three-dimensional virtual objects are superimposed over this view, creating the illusion that they actually exist. The user can reorient their device to explore the objects from different angles and, if appropriate for the experience, interact with them using gestures and movement.” With their track record on iOS Apps for iPhones and iPads, that may give them an edge.

Not to be overlooked, however, is the now famous and impressive Oculus Rift virtual reality headset, now a Facebook company. It’s still expensive but very competitive. “Content for the Rift is developed using the Oculus PC SDK, a free proprietary SDK available for Microsoft Windows (OSX and Linux support is planned for the future). This is a feature complete SDK which handles for the developer the various aspects of making virtual reality content, such as the optical distortion and advanced rendering techniques,” according to Wikipedia.

With Augmented Reality (AR), you can immerse yourself in games, education, work, movies or whatever. The big question, however, is whether this will be just a short-term fad or enduring like the personal computer or smartphone. We just don’t know yet, but the question of industry-wide standards versus proprietary platforms will likely be an issue. We do know that familiar tech giants are betting on VR and AR for current and future products. What is certain is that thin-film technologies are essential for manufacturing the LCD, OLED or other screens used in the requisite headsets. If you thought that building 4K or 8K HDTVs was challenging, VR and AR in small form factors will make it interesting. The VR headset options abound.

A Crowded Market

“By 2016 there were at least 230 companies developing VR-related products. Facebook had 400 employees focused on VR development; Google, Apple, Amazon, Microsoft, Sony and Samsung all had dedicated AR and VR groups. Dynamic binaural audio was common to most headsets released that year. However, haptic interfaces were not well developed, and most hardware packages incorporated button-operated handsets for touch-based interactivity. Visually, displays were still of a low enough resolution and frame rate that images were still identifiable as virtual. On April 5, 2016, HTC shipped its first units of the HTC VIVE SteamVR headset. This marked the first major commercial release of sensor-based tracking, allowing for free movement of users within a defined space,” per Wikipedia [https://en.wikipedia.org/wiki/Virtual_reality_headset].

Is AI Ready?

“We are at an inflection point in the development and application of AI technologies,” according to the Partnership on AI [https://www.partnershiponai.org/introduction/]. “The upswing in AI competencies, fueled by data, computation, and advances in algorithms for machine learning, perception, planning, and natural language, promise great value to people and society.

“However, with successes come new concerns and challenges based on the effects of those technologies on people’s lives. These concerns include the safety and trustworthiness of AI technologies, the fairness and transparency of systems, and the intentional as well as inadvertent influences of AI on people and society.

“On another front, while AI promises new capabilities and efficiencies, the advent of these new technologies has raised understandable questions about potential disruptions to the nature and distribution of jobs. While there is broad agreement that AI advances are poised to generate great wealth, it remains uncertain how that wealth will be shared broadly. We do, however, also believe that there will be great opportunities to harness AI methods to solve important societal challenges.

“We designed the Partnership on AI, in part, so that we can invest more attention and effort on harnessing AI to contribute to solutions for some of humanity’s most challenging problems, including making advances in health and wellbeing, transportation, education, and the sciences.”

Some founding members of this Partnership on AI include Apple Inc., Amazon, DeepMind, Facebook, Google (Android, Chrome), IBM (Watson computer), Intel Corp., Microsoft, Sony (PlayStation) and the Association for the Advancement of Artificial Intelligence (AAAI).

What Is Needed

In future blogs, we’ll revisit some specific AI and VR applications of interest and challenges.

Observations & Opportunities

Faster Changes and the Implications

We are indeed in the midst of rapidly changing times. Some of today’s global issues and challenges are the result of using various technologies without thinking the impact through. Regardless, science and technology will be essential for solving these problems.

Something to keep in mind while contemplating anything new with science and technology: it is prudent to consider the impact of any new product or manufacturing process itself, all of the equipment and materials involved, and what happens throughout the world as advanced technologies permeate virtually everything in our business or personal lives.

What’s Really New

We all have some readily available topics in mind when “new” technologies are mentioned but we probably overlook several areas of great potential. Below is the thought-provoking Hype Cycle for Emerging Technologies, 2017 chart from Gartner Inc. Every topic on the chart is significant and there are overlapping interactions between these new technologies that are likely unpredictable. Someone invariably uses a product or technology for something totally overlooked by the original technology inventors and proponents—and unintended consequences result.

Hype Cycle for Emerging Technologies, 2017. Source: Gartner July 2017

Gartner, Inc., of Stamford, Connecticut, is a global research and advisory company. It helps business leaders in every industry and enterprise make the right decisions, with objective insights, analyses and predictions.

As Gartner states in its “Gartner Identifies Three Megatrends That Will Drive Digital Business Into the Next Decade” [http://www.gartner.com/newsroom/id/3784363], “The emerging technologies on the Gartner Inc. Hype Cycle for Emerging Technologies, 2017 reveal three distinct megatrends that will enable businesses to survive and thrive in the digital economy over the next five to 10 years. According to Mike J. Walker, research director at Gartner, “Artificial intelligence (AI) everywhere, transparently immersive experiences and digital platforms are the trends that will provide unrivaled intelligence, create profoundly new experiences and offer platforms that allow organizations to connect with new business ecosystems.”

The Hype Cycle for Emerging Technologies report is the longest-running annual Gartner Hype Cycle, providing a cross-industry perspective on the technologies and trends that business strategists, chief innovation officers, R&D leaders, entrepreneurs, global market developers and emerging-technology teams should consider in developing emerging-technology portfolios.

“The Emerging Technologies Hype Cycle is unique among most Gartner Hype Cycles because it garners insights from more than 2,000 technologies into a succinct set of compelling emerging technologies and trends. This Hype Cycle specifically focuses on the set of technologies that is showing promise in delivering a high degree of competitive advantage over the next five to 10 years,” added Walker.

Their Hype Cycle graphic, above, touches upon many exciting technologies including IoT (Internet of Things), autonomous vehicles, nanotube electronics and many other key categories. The obvious thread that ties all of these things together is ubiquitous computing and communications access plus advanced software. Gartner’s take on when these technologies reach mainstream adoption seems reasonable today but unforeseen events, or just some major company’s very secret product launch plans, could significantly alter those predictions.

Just recall Apple Inc.’s launch of the iPhone a decade ago. We already had our cell phones but Apple’s elegant design plus marketing spin on what consumers wanted, or could be convinced that they needed and could use, proved remarkable. The iPhone changed mobile information access and use in profound ways as well as computing in general.

Today’s plethora of smart phones, tablets and laptops are all influenced by the iPhone to varying degrees. With numerous companies like Apple and Google pursuing Augmented Reality (AR) and/or Virtual Reality (VR) to make information access and usage better while improving work or entertainment experiences, these technologies can have both predicted and unintended consequences too.

The Internet is essential now for most businesses and consumers which has driven its enormous growth. However, Internet security concerns are something that requires ongoing vigilance to avoid viruses, malware and scams to protect its viability. It also requires products with built-in protection and the ability to upgrade protection as new threats emerge.

AI or VR features are already appearing in more consumer products, as well as in the Internet of Things (IoT) for both consumers and businesses. AR and VR plus more powerful personal computers, smartphones and tablets are certainly changing how many people work and play. Not everyone will need or use all of these things, but the impact will be felt by everyone. If you want to check the weather, you probably get more useful information quicker on your smartphone or tablet than on your PC.

A pixel comparison chart of SD (standard definition), Full HD (1080p), 4K Ultra HD and 8K Ultra HD displays. Image by Libron – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=25976260

Whether it is your laptop, desktop, smartphone, tablet, 4K or soon 8K TV, automobile dashboard, thermostat or any other product with an electronic display, it is obvious that display technology continues to advance too. Many were content with HDTV (1080p) until 4K TVs appeared and the better pictures, combined with a marketing push, made 4K Ultra HD TV a must-have product for many people. The 8K push will begin as soon as enough cameras are available for content production, movies at first and TV later. Of course, Blu-ray and other disc player manufacturers will have to add upconversion for the 8K sets. Since high-speed Internet is still not available everywhere, discs will prove useful for a while.
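To put those display generations in perspective, here is a short Python sketch of how the pixel counts scale. The resolutions are the common consumer definitions (an assumption; “SD” in particular varies by region, and NTSC 720x480 is used here):

```python
# Pixel counts for the display formats in the comparison chart.
RESOLUTIONS = {
    "SD (480p)": (720, 480),
    "Full HD (1080p)": (1920, 1080),
    "4K Ultra HD": (3840, 2160),
    "8K Ultra HD": (7680, 4320),
}

def megapixels(width, height):
    """Total pixel count in millions."""
    return width * height / 1e6

for name, (w, h) in RESOLUTIONS.items():
    print(f"{name}: {megapixels(w, h):.2f} MP")
```

Each step from Full HD to 4K to 8K quadruples the pixel count, which is part of why content capture and upconversion lag behind the displays themselves.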

AI technologies will be the most disruptive class of technologies over the next 10 years due to radically increased computational power, near-endless amounts of data, and unprecedented advances in deep neural networks; these will enable organizations and governments with AI technologies to harness data in order to adapt to new situations and solve problems that no one has ever encountered previously.

Walker believes that enterprises seeking leverage in this theme should consider the following technologies: Deep Learning, Deep Reinforcement Learning, Artificial General Intelligence, Autonomous Vehicles, Cognitive Computing, Commercial UAVs (Drones), Conversational User Interfaces, Enterprise Taxonomy and Ontology Management, Machine Learning, Smart Dust, Smart Robots and Smart Workspace. This will require some serious homework to grasp the implications and the best ways to use them. Some observers may worry that these new tools bring George Orwell’s novel Nineteen Eighty-Four closer to reality. Of course, the novel was just fiction.

Note: Upcoming weblogs will address why MEMS devices are essential in mobile devices and many other products and why vacuum technology is essential for their manufacturing.

Vacuum Observations & Perspectives

In this weblog series, we’ll examine some emerging and evolving opportunities for vacuum-centric equipment, materials, processes and R&D. Also, we’ll look at some interesting applications and technology trends. Thinking outside the box, if you will, with a broad perspective.

Vacuum use is only likely to increase going forward, especially when producing very small and complex components like ICs and MEMS for sophisticated products. Doing so in a vacuum is the only practical way to prevent gaseous and particulate contamination. Vacuum is absolutely essential for semiconductor production.

Semiconductor Challenges—Manufacturing Is Difficult

The familiar semiconductor industry is now changing significantly. These changes provide many opportunities for those companies that can meet the challenging technical and investment demands. The microchip industry is always continuing its quest for ever smaller geometries (circuit features) to get more devices produced per wafer area. That’s how Moore’s Law has kept innovation and performance increases alive.

By constantly pushing new frontiers of equipment, materials and processes, chipmakers produce the ubiquitous integrated circuit (IC) chips at reasonable prices. Historically, jumps in wafer size were tied to photolithography advances that enabled smaller feature sizes. The result was more advanced semiconductor chips with greater performance. Transistors are much smaller now and circuits more complex, delivering greater performance, but they are extremely difficult to manufacture.
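The economics behind the shrink can be illustrated with a common rule-of-thumb die-per-wafer estimate. This is a rough sketch; the edge-loss correction term is a textbook approximation, not any fab’s actual yield model:

```python
import math

def gross_die_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Textbook rule-of-thumb for gross die per wafer:
    wafer area / die area, minus a correction for the partial
    die lost around the wafer edge."""
    d = wafer_diameter_mm
    usable = math.pi * (d / 2) ** 2 / die_area_mm2
    edge_loss = math.pi * d / math.sqrt(2 * die_area_mm2)
    return int(usable - edge_loss)

# Halving die area (very roughly a one-node shrink) more than
# doubles the gross die count on a 300 mm wafer:
print(gross_die_per_wafer(300, 100))  # 640
print(gross_die_per_wafer(300, 50))   # 1319
```

Note that smaller die gain slightly more than 2x because proportionally fewer of them are wasted at the wafer edge.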

The greatest demand for computer chips now is in mobile devices, wireless, IoT and automotive areas. Laptops, tablets and smartphones all benefit. Since desktop PC performance is already greater than what most customers actually need, mobile devices are often a primary focus. For products where space and extreme performance are not necessary, conventional IC manufacturing fabs will be cranking out those chips for years. You certainly don’t need state-of-the-art microprocessors in your home’s thermostat.

Monolithic ICs

Semiconductor manufacturing, often considered a mature industry, needs near-perfect thin films and truly advanced lithography patterning along with sophisticated deposition and etching process technologies to produce the active and passive elements in ICs—the core of most high tech products worldwide. Planar 2D production is mainstream now in semiconductor manufacturing but some high-profit, in-demand chips are now made with 3D circuitry. Conventional optical photolithography now appears to be at the end of its affordable smaller dimensions potential but there is a promising new exposure system: EUVL.

2D, 3D & EUV Lithography

To produce more chips in the same wafer surface space, you need smaller chips that provide the same functionality in smaller dimensions. Greater functionality and faster performance are the norms. If the latest EUVL (Extreme UltraViolet Lithography) tools prove ready for prime time with high-volume production next year, the quality and flatness of the many deposited thin film layers will become even more critical than they are now. EUV needs a stringent vacuum environment. See “The Use of EUV Lithography in Consumer Microchip Manufacturing” at http://www.pitt.edu/~budny/papers/23.pdf for some insights.
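The resolution stakes behind EUV can be sketched with the standard Rayleigh criterion. The wavelength, numerical aperture and k1 values below are representative assumptions, not specifications of any particular tool:

```python
def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.4):
    """Rayleigh criterion: printable half-pitch = k1 * wavelength / NA.
    k1 = 0.4 is an assumed, fairly conservative process factor."""
    return k1 * wavelength_nm / numerical_aperture

# Representative numbers only:
# ArF immersion: 193 nm light, NA around 1.35; EUV: 13.5 nm, NA around 0.33
print(f"ArF immersion: {min_feature_nm(193, 1.35):.1f} nm")  # ~57 nm
print(f"EUV:           {min_feature_nm(13.5, 0.33):.1f} nm")  # ~16 nm
```

The order-of-magnitude drop in wavelength is what lets EUV print in a single exposure features that ArF can only reach with costly multi-patterning.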

In July 2017, ASML Holding N.V. (ASML) President and Chief Executive Officer Peter Wennink said, “In EUV lithography, we have integrated an upgraded EUV source into a TWINSCAN NXE:3400B lithography system in our Veldhoven [The Netherlands] facility and achieved the throughput specification of 125 wafers per hour on this system. Now, with all key performance specifications demonstrated, we focus on achieving the availability that is required for high-volume manufacturing as well as further improving productivity.”

That sounds promising but the actual EUV masks and photoresists seem somewhat problematic from published comments and reports. In July 2017, BACUS (formerly the Bay Area Chrome Users Society) noted, “Recently, readiness of the EUVL infrastructure for the high volume manufacturing (HVM) has been accelerated [1]. EUV source availability, the first showstopper against EUVL HVM, has been dramatically increased and close to the targets for HVM insertion. Mask defectivity, another focus area for the HVM, has also been concerned. Due to the difference in mask and optics appropriate for the wavelength between EUV and ArF lithography, specialized metrology tools are required in EUVL. However, current DUV and e-beam inspection tools are easy to miss the printable phase defects in EUV mask since the lights of corresponding wavelengths cannot penetrate multilayers (MLs)[2,3]. Therefore, the actinic review system is essential to provide defect free EUV masks.” See more details at https://spie.org/Documents/Membership/BacusNewsletters/BACUS-Newsletter-July-2017.pdf

The Samsung R&D experts who authored this are betting on EUV and 7nm lithography to take some IC foundry business away from the leading foundry producer, TSMC (Taiwan Semiconductor Manufacturing Co.), but TSMC is also planning to introduce 7nm EUV devices next year. We’ll see. EUV keeps being promised as “soon” but the dates keep slipping. Reuters noted, “But the firm lags well behind Taiwan’s TSMC in contract manufacturing: TSMC held a market share of 50.6 percent last year compared with Samsung’s 7.9 percent, according to research firm IHS. It also trailed U.S.-based Global Foundries, which had a 9.6 percent share, and Taiwan-based UMC’s 8.1 percent.”

Wafer Sizes Matter

First, there are wafer size considerations. Today, 200mm wafer fabs are typically running at capacity and some new 200mm fabs are being built, but some essential 200mm fabrication tools are in short supply and expensive. Some of these fabs once made high-volume state-of-the-art ICs but have transitioned to more profitable proprietary and/or lower volume chips. Supporting 200mm will be necessary for the foreseeable future.

Per Christian G. Dieseldorff, Industry Research & Statistics Group, SEMI, at SEMICON West 2017, “Driven by mobile and wireless applications, IoT (Internet of Things), and automotive, the 200mm market is thriving.  Many of the products used in these applications are produced on 200mm wafers, so companies are expanding capacity in their facilities to the limit, and there are nine new 200mm facilities in the pipeline. Looking only at IC volume fabs, the report shows 188 fabs in production in 2016 and expanding to 197 fabs by 2021. China will add most of the 200mm capacity through 2021, with 34 percent growth rate from 2017 to 2021, followed by South East Asia with 29 percent and the Americas with 12 percent.”

The 300mm fabs, once expected to displace 200mm fabs, are now competing with 200mm in some markets where smaller volumes can make 300mm efforts more expensive. But 300mm is running at capacity too with the most advanced chips. There also has been the realization by the major semiconductor manufacturers that many improvements to optimize 300mm manufacturing are possible which delays the costly transition to building factories that handle 450mm wafers.

Finally, 450mm seems to be a wafer size that can wait, perhaps until 2020 per SEMI [http://www.semi.org/en/node/50856]. Also, the New York-based 450mm global consortium, G450C, is defunct. Samsung, and others, are now stacking many layers of transistors on the same memory die with smaller transistors, so the need for 450mm wafers is not as urgent now although companies are still exploring 450mm options for the future.
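The appeal of each wafer-size jump is simple geometry. A quick sketch (using plain disc area and ignoring edge exclusion) shows why each step is attractive:

```python
import math

def wafer_area_cm2(diameter_mm):
    """Simple disc area; real usable area is smaller due to edge exclusion."""
    radius_cm = diameter_mm / 2 / 10  # mm diameter -> cm radius
    return math.pi * radius_cm ** 2

for d in (200, 300, 450):
    print(f"{d} mm wafer: {wafer_area_cm2(d):.0f} cm^2")

# Each step multiplies the area by 2.25x, since (300/200)^2 = (450/300)^2 = 2.25
```

That 2.25x area per step, against the enormous cost of new tooling, is the tradeoff that has kept 450mm on hold.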

Then, of course, there are the high-end, leading-edge efforts to switch to EUV lithography in the quest to produce even more ICs per unit area on a wafer. EUV emerged when x-ray lithography proved problematic years ago. X-ray lithography was first proposed by H. Smith and D. Spears at MIT. There are recollections of the frantic x-ray lithography efforts several years ago that never reached mainstream production status. X-ray lithography promised feature sizes approaching 1nm, far smaller than the EUV efforts of today will produce.

Mainstream x-ray lithography simply had too many issues at the time: dangerous x-ray sources as well as expensive and problematic masks and resists. Proposed x-ray synchrotron radiation sources required very long times to reach acceptably low vacuum levels, which is problematic for volume production lines that cannot stop operating for maintenance. For some perspective on x-ray lithography’s origins, check out “X-ray lithography: Some history, current status and future prospects” by Juan R. Maldonado and Martin Peckerar at https://www.researchgate.net/publication/299496830_X-ray_Lithography_Some_History_Current_Status_and_Future_Prospects.

Note: Upcoming weblogs will address IC lithography issues and why MEMS devices are essential in mobile devices and many other products.

Deposition Technology: Electron Beam Evaporation

Electron beam (e-beam) evaporation heats the source material with an electron beam, as opposed to thermal evaporation, which uses resistive heating (discussed in a previous Blog). A typical e-beam evaporation system is shown schematically in Figure 1. E-beam heated sources have two major benefits:

  • Very high power density and, as a result, a wide range of control over evaporation rates from very low to very high
  • Evaporant is contained in a water-cooled Cu hearth, thus eliminating the problem of crucible contamination

While they will not be discussed in any detail, the two major types of e-beam sources (guns) are thermionic and plasma types (details of these guns can be found in reference 1). Deposition rates in this process range from as low as 1 nm/min to as high as a few μm/min. Material utilization efficiency is high compared to other PVD methods, and the process offers structural and morphological control of films (discussed shortly). Due to the very high deposition rate, this process has been used in industrial applications for wear resistant and thermal barrier coatings in the aerospace industry, hard coatings for the cutting and tool industries, and electronic and optical films for semiconductor and thin film solar applications.
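The breadth of that rate range is easier to appreciate with a trivial time-to-thickness calculation (the thickness and rate values are illustrative assumptions only):

```python
def deposition_time_min(target_thickness_nm, rate_nm_per_min):
    """Time to grow a film of the target thickness at a constant rate."""
    return target_thickness_nm / rate_nm_per_min

# The quoted e-beam range runs from about 1 nm/min to a few um/min,
# so a 500 nm film can take hours or well under a minute:
print(deposition_time_min(500, 1))     # 500.0 min at the low end
print(deposition_time_min(500, 2000))  # 0.25 min at 2 um/min
```

In practice the rate setpoint is a compromise: higher rates boost throughput but can degrade film structure and adhesion, as noted below.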

Note that one of the major problems with these processes is the low energy of the evaporated atoms, which can cause problems in the thin films such as poor adhesion to the substrate, reduced density, porous and columnar microstructure, increased moisture pickup and poor mechanical properties. This will be addressed in the next Blog.

Deposition of single elements by evaporation is straightforward. Deposition of alloys with two or more components, however, can be a challenge due to different vapor pressures and evaporation rates. The solution is to use multiple sources, one for each constituent of the alloy; the material evaporated from each source can be a metal, alloy or compound. The challenges of this approach are calibrating the deposition rate of each source, ensuring that the adatom beam from each source coincides uniformly with the beams from the other sources so that deposited atoms are sufficiently blended, and obtaining uniform density of each material over the substrate surface.
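One way to see the calibration challenge: the film’s composition follows from the atomic arrival rate of each source, which depends on density and molar mass as well as the thickness rate. A simplified sketch (the helper function and the Ti/Al example values are illustrative assumptions, not from the text):

```python
def atomic_fraction_a(rate_a, rate_b, density_a, molar_mass_a,
                      density_b, molar_mass_b):
    """Atomic fraction of component A in an A-B co-evaporated film.
    Atomic arrival rate scales as (thickness rate) * density / molar mass;
    rates in any common unit, densities in g/cm^3, molar masses in g/mol."""
    flux_a = rate_a * density_a / molar_mass_a
    flux_b = rate_b * density_b / molar_mass_b
    return flux_a / (flux_a + flux_b)

# Hypothetical example: Ti (4.51 g/cm^3, 47.9 g/mol) and Al
# (2.70 g/cm^3, 27.0 g/mol) co-evaporated at equal 10 nm/min
# thickness rates give a slightly Al-rich film, not 50/50:
x_ti = atomic_fraction_a(10, 10, 4.51, 47.9, 2.70, 27.0)
print(f"Ti atomic fraction: {x_ti:.3f}")  # 0.485
```

Equal thickness rates do not mean equal atomic fractions, which is why each source needs its own calibrated rate monitor.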

Difficulties in direct evaporation of oxides, nitrides, fluorides, carbides, etc. are due to fragmentation of the vaporized compounds. Reactive evaporation can overcome many of these problems. Here, metals are evaporated in the presence of a reactive gas. A schematic of a typical reactive evaporation system is also shown in Figure 1. The problem here is that most oxides are substoichiometric due to the low energy of the evaporated adatoms, and deposition rates can suffer. In most cases, stoichiometric films are deposited only at low deposition rates. Reaction kinetics (i.e., the low energy of evaporated atoms/molecules) prohibit full oxidation or full reaction with the reactive species, which limits the potential applications of these films as optical coatings in particular. To this end, activated reactive evaporation (ARE) was developed [1,2]. This process generally involves evaporation of a metal or alloy in the presence of a plasma of a reactive gas, hence the term “activated”. For example, TiC and TiN coatings are deposited by evaporating Ti in a C2H2 or N2 plasma, respectively. The plasma has the following roles:

  • Enhance reactions that are necessary for deposition of compound films
  • Modify growth kinetics and, as a result, structure/morphology of deposits
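One reason stoichiometry depends so strongly on conditions is the balance between metal arrival and reactive-gas arrival at the substrate. The standard Hertz-Knudsen impingement flux gives a feel for the gas side; the pressure and temperature below are illustrative assumptions, not process recipes:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro constant, 1/mol

def impingement_flux(pressure_pa, molar_mass_kg_mol, temp_k=300.0):
    """Hertz-Knudsen impingement flux (molecules per m^2 per s):
    P / sqrt(2 * pi * m * kB * T), with m the mass of one molecule."""
    m = molar_mass_kg_mol / N_A
    return pressure_pa / math.sqrt(2 * math.pi * m * K_B * temp_k)

# Illustrative: N2 (0.028 kg/mol) at 1e-2 Pa and room temperature
flux_per_cm2 = impingement_flux(1e-2, 0.028) / 1e4
print(f"{flux_per_cm2:.2e} N2 molecules per cm^2 per s")  # ~2.9e16
```

Comparing this gas flux with the metal arrival rate shows why high metal deposition rates outrun the available reactive species, yielding substoichiometric films unless the gas is activated in a plasma.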

Applications for evaporated thin film coatings include:

  • Multilayer optical coatings
  • Metallization
  • Large area metal mirror coatings for telescopes
  • Metallization for roll coating
  • Decorative coatings
  • Hard coatings
  • Tribological coatings
  • Thin film solar cells
  • Diffusion barriers
  • Thermal barrier coatings
  • Organic thin films

Evaporation was one of the first processes used extensively for the deposition of thin films. It enabled the use of thin films in a wide variety of applications, lowering manufacturing costs and extending the functionality of bulk materials. Large-area coverage became possible. Evaporation and related processes are still used extensively to synthesize a wide variety of thin film coating materials.

A full range of evaporated optical coatings is being produced, including AR on glass and plastic, mirror coatings, beam splitters, filters, transparent conductive coatings, dichroic coatings, attenuation coatings and fiber optic coatings [3]. Coating of large areas, as well as high speed web coating, are possible with e-beam systems. Figure 2 shows the evaporated 8m Al Gemini telescope mirror coating. These have come a long way from the evaporated quartz/MgF2/cryolite/Al coatings in 1936 [4].

References:

  1. S Ismat Shah et al., Chapter 4, Handbook of Deposition Technologies for Films and Coatings: Science, Applications and Technology, 3rd Ed., Peter M Martin (Ed.), Elsevier (2010).
  2. R F Bunshah & C Deshpandey, in Physics of Thin Films, J L Vossen and M H Francombe (Eds.), Academic Press (1987).
  3. See http://www.evaporatedcoatings.com/index.php?WTX=gaw.
  4. J Strong, J Opt Soc Am 26 (1936) 73.