AI & Virtual Reality—Let The Games (and Work) Begin!

Artificial Intelligence (AI) and Virtual Reality (VR) have enormous potential for entrepreneurs, designers, software experts, video game developers and high-tech manufacturers. Common AI and VR essentials include enormous computing power packed into extremely small spaces and exceptionally high-definition displays, real or projected. Many sensors and actuators will be required too. If today’s marketplace prices are any indication, consumers, businesses and militaries will pay handsomely for the very best. But the field presents several challenges, starting with terminology.

AI terminology seems fairly uniform, but what a given company actually describes as AI can vary enormously. VR has several common descriptive variants, since enhanced reality is more affordable to manufacture and more affordable for consumers. Microsoft recently announced Windows Mixed Reality headsets as part of a new thrust: “We are on a mission to help empower every person and organization on the planet to achieve more, and one of the ways we are doing that is through the power of mixed reality,” said Alex Kipman, Technical Fellow at Microsoft. This is a follow-up to the company’s earlier HoloLens headsets. The technology behind it allows the latest generation of Windows 10 hardware and software to deliver augmented and virtual reality experiences.

Caption: Windows 10 Mixed Reality headsets from partners Lenovo, Acer, Dell & HP

Alphabet’s (Google’s parent company) X division has moved on to refining its technology, with the Google Glass 2.0 headsets the result. Google is targeting business and manufacturing applications that will help boost productivity. The original Google Glass headsets had some appeal but also some problems, which were addressed in the version 2.0 headsets.

Caption: Google Glass 2.0 headset

Augmented Reality

“AR is a live direct or indirect view of a physical, real-world environment whose elements are “augmented” by computer-generated or extracted real-world sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called computer-mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. Augmented reality enhances one’s current perception of reality, whereas in contrast, virtual reality replaces the real world with a simulated one.[1][2] Augmentation techniques are typically performed in real time and in semantic context with environmental elements, such as overlaying supplemental information like scores over a live video feed of a sporting event,” according to Wikipedia. [https://en.wikipedia.org/wiki/Augmented_reality]

Apple is excited about Augmented Reality and will soon introduce such capabilities on iPhones and iPads. There are rumors that it may introduce glasses too. What is real right now is Apple’s ARKit, a set of software developer tools for creating augmented reality apps for iOS.

According to Apple developers, “Apps can use Apple’s augmented reality (AR) technology, ARKit, to deliver immersive, engaging experiences that seamlessly blend realistic virtual objects with the real world. In AR apps, the device’s camera is used to present a live, onscreen view of the physical world. Three-dimensional virtual objects are superimposed over this view, creating the illusion that they actually exist. The user can reorient their device to explore the objects from different angles and, if appropriate for the experience, interact with them using gestures and movement.” With Apple’s track record on iOS apps for iPhones and iPads, that may give it an edge.
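To make that concrete, here is a minimal sketch in Swift of the pattern ARKit supports, assuming iOS 11 with SceneKit for rendering. The view controller name and the floating cube are our own illustration; the ARKit and SceneKit calls are the standard ones Apple documents:

    import UIKit
    import SceneKit
    import ARKit

    class ARViewController: UIViewController {
        let sceneView = ARSCNView()

        override func viewDidLoad() {
            super.viewDidLoad()
            sceneView.frame = view.bounds
            view.addSubview(sceneView)

            // Superimpose a virtual cube half a meter in front of the camera.
            let cube = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                                length: 0.1, chamferRadius: 0))
            cube.position = SCNVector3(0, 0, -0.5)
            sceneView.scene.rootNode.addChildNode(cube)
        }

        override func viewWillAppear(_ animated: Bool) {
            super.viewWillAppear(animated)
            // World tracking keeps virtual objects anchored as the user moves.
            let configuration = ARWorldTrackingConfiguration()
            configuration.planeDetection = .horizontal
            sceneView.session.run(configuration)
        }

        override func viewWillDisappear(_ animated: Bool) {
            super.viewWillDisappear(animated)
            sceneView.session.pause()
        }
    }

The camera feed, motion tracking and lighting estimation all happen inside ARKit; the developer mostly describes the virtual content and decides when to run or pause the session.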

Not to be overlooked, however, is the now famous and impressive Oculus Rift virtual reality headset from Oculus, now a Facebook company. It is still expensive but very competitive. “Content for the Rift is developed using the Oculus PC SDK, a free proprietary SDK available for Microsoft Windows (OSX and Linux support is planned for the future).[62] This is a feature complete SDK which handles for the developer the various aspects of making virtual reality content, such as the optical distortion and advanced rendering techniques,” according to Wikipedia.

With Augmented Reality (AR), one can immerse oneself in games, education, work, movies or whatever. The big question, however, is whether this will be a short-term fad or endure like the personal computer or smartphone. We just don’t know yet, but the question of industry-wide standards versus proprietary platforms will likely be an issue. We do know that familiar tech giants are betting on VR and AR for current and future products. What is certain is that thin-film technologies are essential for manufacturing the LCD, OLED or other screens used in the requisite headsets. If you thought that building 4K or 8K HDTVs was challenging, VR and ER (enhanced reality) displays in small form factors will make things interesting. The VR headset options abound.

A Crowded Market

“By 2016 there were at least 230 companies developing VR-related products. Facebook has 400 employees focused on VR development; Google, Apple, Amazon, Microsoft, Sony and Samsung all had dedicated AR and VR groups. Dynamic binaural audio was common to most headsets released that year. However, haptic interfaces were not well developed, and most hardware packages incorporated button-operated handsets for touch-based interactivity. Visually, displays were still of a low-enough resolution and frame-rate that images were still identifiable as virtual. On April 5, 2016, HTC shipped its first units of the HTC VIVE SteamVR headset. This marked the first major commercial release of sensor-based tracking, allowing for free movement of users within a defined space,” per Wikipedia [https://en.wikipedia.org/wiki/Virtual_reality_headset].

Is AI Ready?

“We are at an inflection point in the development and application of AI technologies,” according to the Partnership on AI [https://www.partnershiponai.org/introduction/]. “The upswing in AI competencies, fueled by data, computation, and advances in algorithms for machine learning, perception, planning, and natural language, promise great value to people and society.

“However, with successes come new concerns and challenges based on the effects of those technologies on people’s lives. These concerns include the safety and trustworthiness of AI technologies, the fairness and transparency of systems, and the intentional as well as inadvertent influences of AI on people and society.

“On another front, while AI promises new capabilities and efficiencies, the advent of these new technologies has raised understandable questions about potential disruptions to the nature and distribution of jobs. While there is broad agreement that AI advances are poised to generate great wealth, it remains uncertain how that wealth will be shared broadly. We do, however, also believe that there will be great opportunities to harness AI methods to solve important societal challenges.

“We designed the Partnership on AI, in part, so that we can invest more attention and effort on harnessing AI to contribute to solutions for some of humanity’s most challenging problems, including making advances in health and wellbeing, transportation, education, and the sciences.”

Some founding members of this Partnership on AI include Apple Inc., Amazon, DeepMind, Facebook, Google (Android, Chrome), IBM (Watson computer), Intel Corp., Microsoft, Sony (PlayStation) and The Association for the Advancement of Artificial Intelligence (AAAI).

What Is Needed

In future blogs, we’ll revisit some specific AI and VR applications of interest and the challenges they present.

Observations & Opportunities

Faster Changes and the Implications

We are indeed in the midst of rapidly changing times. Some of today’s global issues and challenges are the result of using various technologies without thinking the impact through. Regardless, science and technology will be essential for solving these problems.

Here is something to keep in mind while contemplating anything new in science and technology: it is prudent to consider the impact of the new product or manufacturing process itself, of all the equipment and materials involved, and of what happens throughout the world as advanced technologies permeate virtually everything in our business and personal lives.

What’s Really New

We all have some readily available topics in mind when “new” technologies are mentioned but we probably overlook several areas of great potential. Below is the thought-provoking Hype Cycle for Emerging Technologies, 2017 chart from Gartner Inc. Every topic on the chart is significant and there are overlapping interactions between these new technologies that are likely unpredictable. Someone invariably uses a product or technology for something totally overlooked by the original technology inventors and proponents—and unintended consequences result.

Caption: Hype Cycle for Emerging Technologies, 2017. Source: Gartner, July 2017

Gartner, Inc., of Stamford, Connecticut, is a global research and advisory company. It helps business leaders in every industry and enterprise make the right decisions by providing objective insights, analyses and predictions.

As Gartner states in “Gartner Identifies Three Megatrends That Will Drive Digital Business Into the Next Decade” [http://www.gartner.com/newsroom/id/3784363], “The emerging technologies on the Gartner Inc. Hype Cycle for Emerging Technologies, 2017 reveal three distinct megatrends that will enable businesses to survive and thrive in the digital economy over the next five to 10 years.” According to Mike J. Walker, research director at Gartner, “Artificial intelligence (AI) everywhere, transparently immersive experiences and digital platforms are the trends that will provide unrivaled intelligence, create profoundly new experiences and offer platforms that allow organizations to connect with new business ecosystems.”

The Hype Cycle for Emerging Technologies report is the longest-running annual Gartner Hype Cycle, providing a cross-industry perspective on the technologies and trends that business strategists, chief innovation officers, R&D leaders, entrepreneurs, global market developers and emerging-technology teams should consider in developing emerging-technology portfolios.

“The Emerging Technologies Hype Cycle is unique among most Gartner Hype Cycles because it garners insights from more than 2,000 technologies into a succinct set of compelling emerging technologies and trends. This Hype Cycle specifically focuses on the set of technologies that is showing promise in delivering a high degree of competitive advantage over the next five to 10 years,” added Walker.

Their Hype Cycle graphic, above, touches upon many exciting technologies including IoT (Internet of Things), autonomous vehicles, nanotube electronics and many other key categories. The obvious thread that ties all of these things together is ubiquitous computing and communications access plus advanced software. Gartner’s take on when these technologies reach mainstream adoption seems reasonable today but unforeseen events, or just some major company’s very secret product launch plans, could significantly alter those predictions.

Just recall Apple Inc.’s launch of the iPhone a decade ago. We already had our cell phones, but Apple’s elegant design, plus its marketing spin on what consumers wanted or could be convinced they needed, proved remarkable. The iPhone changed mobile information access and use in profound ways, as well as computing in general.

Today’s plethora of smartphones, tablets and laptops are all influenced by the iPhone to varying degrees. With numerous companies like Apple and Google pursuing Augmented Reality (AR) and/or Virtual Reality (VR) to make information access and usage better while improving work or entertainment experiences, these technologies can have both predicted and unintended consequences too.

The Internet is now essential for most businesses and consumers, which has driven its enormous growth. However, protecting its viability requires ongoing vigilance against viruses, malware and scams. It also requires products with built-in protection and the ability to upgrade that protection as new threats emerge.

AI and VR features are already appearing in more consumer products, as well as in the Internet of Things (IoT) for consumers and businesses alike. VR and ER, plus more powerful personal computers, smartphones and tablets, are certainly changing how many of us work and play. Not everyone will need or use all of these things, but the impact will be felt by everyone. If you want to check the weather, you probably get more useful information faster on your smartphone or tablet than on your PC.

Caption: A pixel comparison of SD (standard definition), Full HD (1080p), 4K Ultra HD and 8K Ultra HD displays. Image by Libron, own work, CC0, https://commons.wikimedia.org/w/index.php?curid=25976260

Whether it is your laptop, desktop, smartphone, tablet, 4K or soon 8K TV, automobile dashboard, thermostat or any other product with an electronic display, it is obvious that display technology continues to advance too. Many were content with HDTV (1080p) until the 4K TVs appeared, and the better pictures combined with a marketing push made 4K Ultra HD TV a must-have product for many people. The 8K push will begin as soon as enough cameras are available for content production, movies at first and TV later. Of course, Blu-ray and other disc player manufacturers will have to add upconversion for the 8K sets. Since high-speed Internet is still not available everywhere, discs will prove useful for a while.
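The manufacturing challenge scales with pixel count, and the numbers grow quickly. Here is a quick back-of-the-envelope sketch in Swift (taking 720×480 as the SD resolution, as in common comparisons):

    // Pixel counts for common display resolutions.
    let resolutions = [
        ("SD (480p)", 720, 480),
        ("Full HD (1080p)", 1920, 1080),
        ("4K Ultra HD", 3840, 2160),
        ("8K Ultra HD", 7680, 4320),
    ]

    for (name, width, height) in resolutions {
        let pixels = width * height
        print("\(name): \(pixels) pixels")
    }
    // Full HD: ~2.1 million pixels; 4K: ~8.3 million; 8K: ~33.2 million.
    // Every 8K panel must be built with 16x the pixels of a Full HD panel.

Each of those pixels is a stack of thin-film structures, which is why display fabrication keeps getting harder as resolutions climb.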

AI technologies will be the most disruptive class of technologies over the next 10 years due to radically increased computational power, near-endless amounts of data, and unprecedented advances in deep neural networks. These will enable organizations and governments that adopt AI technologies to harness data in order to adapt to new situations and solve problems that no one has encountered previously.

Walker believes that enterprises seeking leverage in this theme should consider the following technologies: Deep Learning, Deep Reinforcement Learning, Artificial General Intelligence, Autonomous Vehicles, Cognitive Computing, Commercial UAVs (Drones), Conversational User Interfaces, Enterprise Taxonomy and Ontology Management, Machine Learning, Smart Dust, Smart Robots and Smart Workspace. Grasping the implications and the best ways to use them will require some serious homework. These new tools may raise concerns among some observers about George Orwell’s novel Nineteen Eighty-Four becoming real. Of course, the novel was just fiction.

Note: Upcoming weblogs will address why MEMS devices are essential in mobile devices and many other products and why vacuum technology is essential for their manufacturing.

Vacuum Observations & Perspectives

In this weblog series, we’ll examine some emerging and evolving opportunities for vacuum-centric equipment, materials, processes and R&D. We’ll also look at some interesting applications and technology trends, thinking outside the box, if you will, with a broad perspective.

Vacuum use is only likely to increase going forward, especially when producing very small and complex components like ICs and MEMS for sophisticated products. Doing so in a vacuum is the only practical way to prevent gaseous and particulate contamination. Vacuum is absolutely essential for semiconductor production.
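One way to see why: the average distance a gas molecule travels between collisions (the mean free path) is only tens of nanometers at atmospheric pressure, so contaminants constantly bombard a wafer’s surface; in high vacuum it stretches to meters. Here is a simple kinetic-theory sketch in Swift, assuming room temperature and a typical effective molecular diameter for air:

    // Mean free path: lambda = kT / (sqrt(2) * pi * d^2 * p)
    let k = 1.380649e-23   // Boltzmann constant, J/K
    let T = 293.0          // room temperature, K
    let d = 3.7e-10        // effective molecular diameter of air, m

    func meanFreePath(pressurePa p: Double) -> Double {
        return k * T / (2.0.squareRoot() * Double.pi * d * d * p)
    }

    print(meanFreePath(pressurePa: 101_325))  // ~6.6e-8 m (66 nm) at 1 atm
    print(meanFreePath(pressurePa: 1e-4))     // ~66 m in high vacuum

In high vacuum, molecules travel from source to substrate with few collisions along the way, which is what clean deposition requires.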

Semiconductor Challenges—Manufacturing Is Difficult

The familiar semiconductor industry is changing significantly now. These changes provide many opportunities for companies that can meet the challenging technical and investment demands. The microchip industry is always continuing its quest for ever smaller geometries (circuit features) to get more devices produced per wafer area. That is how Moore’s Law has kept innovation and performance increases alive.

By constantly pushing new frontiers of equipment, materials and processes, chipmakers produce the ubiquitous integrated circuit (IC) chips at reasonable prices. Historically, jumps in wafer size were tied to photolithography advances that enabled smaller feature sizes. The result was more advanced semiconductor chips with greater performance. Transistors are much smaller now and circuits more complex, but greater performance results. They are, however, extremely difficult to manufacture.
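The economics of a shrink are easy to sketch. In the classic node-to-node step, linear dimensions scale by about 0.7×, so die area roughly halves and the dies per wafer roughly double. The short Swift sketch below illustrates this, as a simplification that ignores yield, scribe lines and edge dies:

    // Rough die-per-wafer gain from a classic 0.7x linear shrink.
    let shrink = 0.7
    let areaFactor = shrink * shrink   // ~0.49: die area roughly halves
    let dieGain = 1.0 / areaFactor     // ~2.04: about twice as many dies
    print(areaFactor, dieGain)

That doubling per node, at roughly constant wafer-processing cost, is the engine behind Moore’s Law economics.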

The greatest demand for computer chips now is in mobile devices, wireless, IoT and automotive areas. Laptops, tablets and smartphones all benefit. Since desktop PC performance is already greater than what most customers actually need, mobile devices are often a primary focus. For products where space and extreme performance are not necessary, conventional IC manufacturing fabs will be cranking out those chips for years. You certainly don’t need state-of-the-art microprocessors in your home’s thermostat.

Monolithic ICs

Semiconductor manufacturing, often considered a mature industry, needs near-perfect thin films and truly advanced lithography patterning, along with sophisticated deposition and etching process technologies, to produce the active and passive elements in ICs—the core of most high-tech products worldwide. Planar 2D production is mainstream in semiconductor manufacturing, but some high-profit, in-demand chips are now made with 3D circuitry. Conventional optical photolithography appears to be near the end of its affordable feature-size shrinking, but there is a promising new exposure system: EUVL.

2D, 3D & EUV Lithography

To produce more chips in the same wafer area, you need smaller chips that provide the same functionality in smaller dimensions. Greater functionality and faster performance are the norms. If the latest EUVL (Extreme UltraViolet Lithography) tools prove ready for prime time in high-volume production next year, the quality and flatness of the many deposited thin-film layers will become even more critical than they are now. EUV also needs a stringent vacuum environment. See “The Use of EUV Lithography in Consumer Microchip Manufacturing” at http://www.pitt.edu/~budny/papers/23.pdf for some insights.

In July 2017, ASML Holding N.V. (ASML) President and Chief Executive Officer Peter Wennink said, “In EUV lithography, we have integrated an upgraded EUV source into a TWINSCAN NXE:3400B lithography system in our Veldhoven [The Netherlands] facility and achieved the throughput specification of 125 wafers per hour on this system. Now, with all key performance specifications demonstrated, we focus on achieving the availability that is required for high-volume manufacturing as well as further improving productivity.”

That sounds promising but the actual EUV masks and photoresists seem somewhat problematic from published comments and reports. In July 2017, BACUS (formerly the Bay Area Chrome Users Society) noted, “Recently, readiness of the EUVL infrastructure for the high volume manufacturing (HVM) has been accelerated [1]. EUV source availability, the first showstopper against EUVL HVM, has been dramatically increased and close to the targets for HVM insertion. Mask defectivity, another focus area for the HVM, has also been concerned. Due to the difference in mask and optics appropriate for the wavelength between EUV and ArF lithography, specialized metrology tools are required in EUVL. However, current DUV and e-beam inspection tools are easy to miss the printable phase defects in EUV mask since the lights of corresponding wavelengths cannot penetrate multilayers (MLs)[2,3]. Therefore, the actinic review system is essential to provide defect free EUV masks.” See more details at https://spie.org/Documents/Membership/BacusNewsletters/BACUS-Newsletter-July-2017.pdf

The Samsung R&D experts who authored this are betting on EUV and 7nm lithography to take some IC foundry business away from the leading foundry producer TSMC (Taiwan Semiconductor Manufacturing Co.) but TSMC is also planning on introducing 7nm EUV devices next year. We’ll see. EUV keeps getting promised as “soon” but the dates keep slipping. Reuters noted, “But the firm lags well behind Taiwan’s TSMC in contract manufacturing: TSMC held a market share of 50.6 percent last year compared with Samsung’s 7.9 percent, according to research firm IHS. It also trailed U.S.-based Global Foundries, which had a 9.6 percent share, and Taiwan-based UMC’s 8.1 percent.”

Wafer Sizes Matter

First, there are wafer size considerations. Today, 200mm wafer fabs are typically running at capacity, and some new 200mm fabs are being built, but some essential 200mm production tools are in short supply and expensive. Some of these fabs once made high-volume, state-of-the-art ICs but have transitioned to more profitable proprietary and/or lower-volume chips. Supporting 200mm will be necessary for the foreseeable future.

Per Christian G. Dieseldorff, Industry Research & Statistics Group, SEMI, at SEMICON West 2017, “Driven by mobile and wireless applications, IoT (Internet of Things), and automotive, the 200mm market is thriving.  Many of the products used in these applications are produced on 200mm wafers, so companies are expanding capacity in their facilities to the limit, and there are nine new 200mm facilities in the pipeline. Looking only at IC volume fabs, the report shows 188 fabs in production in 2016 and expanding to 197 fabs by 2021. China will add most of the 200mm capacity through 2021, with 34 percent growth rate from 2017 to 2021, followed by South East Asia with 29 percent and the Americas with 12 percent.”

The 300mm fabs, once expected to displace 200mm fabs, now compete with 200mm in some markets where smaller volumes can make 300mm efforts more expensive. But 300mm is running at capacity too, with the most advanced chips. Major semiconductor manufacturers have also realized that many improvements to optimize 300mm manufacturing are still possible, which delays the costly transition to building factories that handle 450mm wafers.

Finally, 450mm seems to be a wafer size that can wait, perhaps until 2020 per SEMI [http://www.semi.org/en/node/50856]. Also, the New York-based 450mm global consortium, G450C, is defunct. Samsung and others are now stacking many layers of transistors on the same memory die with smaller transistors, so the need for 450mm wafers is not as urgent, although companies are still exploring 450mm options for the future.
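Wafer-size economics come down to simple geometry: usable area grows with the square of the diameter, as this quick Swift sketch shows (ignoring edge exclusion):

    // Wafer area grows with the square of diameter.
    for diameterMM in [200.0, 300.0, 450.0] {
        let radius = diameterMM / 2
        let areaCM2 = Double.pi * radius * radius / 100  // mm^2 -> cm^2
        print("\(Int(diameterMM))mm wafer: \(Int(areaCM2)) cm^2")
    }
    // 314, 707 and 1590 cm^2: each jump offers 2.25x the area,
    // and thus roughly 2.25x the dies per wafer pass.

That 2.25× step explains both the attraction of 450mm and the reluctance: nearly every tool in the fab must be redesigned to win it.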

Then, of course, there are the high-end, leading-edge efforts to switch to EUV lithography in the quest to produce even more ICs per unit area on a wafer. EUV emerged when x-ray lithography proved problematic years ago. X-ray lithography was first proposed by Spears and Smith at MIT. Some of us recall the frantic x-ray lithography efforts of years ago that never reached mainstream production status. X-ray lithography promised feature sizes approaching 1nm, far smaller than the EUV efforts of today will produce.

Mainstream x-ray lithography simply had too many issues at the time: dangerous x-ray sources as well as expensive, problematic masks and resists. Proposed x-ray synchrotron radiation sources required very long times to reach acceptably low pressures, which is problematic for volume production lines that cannot stop operating for maintenance. For some perspective on x-ray lithography’s origins, check out “X-ray lithography: Some history, current status and future prospects” by Juan R. Maldonado and Martin Peckerar at https://www.researchgate.net/publication/299496830_X-ray_Lithography_Some_History_Current_Status_and_Future_Prospects.

Note: Upcoming weblogs will address IC lithography issues and why MEMS devices are essential in mobile devices and many other products.