CES 2026: Physical AI Moves from Concept to Deployment

CES 2026 marked an inflection point where Physical AI – the convergence of AI with robotics and real-world systems – moved from concept to reality. No longer confined to lab models or tech demos, AI is now embodied in robots on the factory floor, autonomous vehicles on the road, smart infrastructure, and even adaptive energy systems. From the keynote stages to the packed expo halls, AI in physical form was everywhere. Major tech CEOs devoted their speeches to robots and automation, and countless booths featured working machines (not just CGI videos) tackling tasks like warehouse picking, farming, and home assistance. Even startup pods touted “factories-in-a-box” micro-manufacturing units and AI-driven logistics demos. In investor lounges and panel sessions, the buzz was that this is the next tech platform shift. In short, CES 2026 made it clear that AI has broken out of the cloud and is stepping into the real world as deployable solutions – a transformation that dominated this year’s show.

Physical AI Goes Mainstage: Keynotes Signal the Shift

When tech’s biggest leaders took the stage, they reframed the AI conversation from software to the physical realm. NVIDIA’s CEO Jensen Huang captured the mood with a bold pronouncement: “The ChatGPT moment for physical AI is here — when machines begin to understand, reason, and act in the real world.” In past CES years, AI talks focused on digital assistants or content generation; in 2026, the emphasis shifted to AI-native manufacturing and robotics + autonomy. Keynotes highlighted how AI models can drive real-world execution – think factory robots that learn via simulation and self-driving cars that explain their decisions. AMD’s CEO Lisa Su, for example, brought on partners from OpenAI and academia to discuss weaving AI into everything from PCs to industrial systems. Hyundai’s press conference centered not on new car models, but on its robotics strategy – even revealing a partnership with Google’s DeepMind to train humanoid robots. Across these talks, presenters hammered home that this is the next era: AI moving beyond the screen and into warehouses, roads, and homes. The consensus was that Physical AI is no niche experiment but the next platform shift, with 2026 as the tipping point that proved AI’s readiness for real-world deployment.

From Demos to Deployments: What Changed in 2026

Above: Boston Dynamics’ production-ready Atlas robot on stage at CES 2026 – a prime example of the leap from impressive demos to practical deployment.

After years of showcasing viral stunts and concept robots, CES 2026 flipped the script with robots performing actual work in real-world scenarios.

For years, CES attendees marveled at robots that could dance, pour drinks, or fold laundry under controlled conditions – cool tricks with an uncertain path to usefulness. In 2026, that changed. This year the robots didn’t just pose for cameras or repeat scripted moves; they actually worked. We saw machines autonomously loading dishwashers, lifting warehouse goods, and navigating busy environments with minimal human oversight. The shift from “wow-factor” demos to deployable products was enabled by clear progress in core capabilities:

  • Reliability & Autonomy: Many robots can now run for hours untethered, self-charge or swap batteries, and handle interruptions. For instance, one humanoid boasts a 4-hour operation time with auto battery swapping, drastically reducing downtime. Companies have focused on hardening robots for continuous duty – meaning far fewer breakdowns when they’re put to work on a factory line or in a field.

  • Vision and Dexterity: Advances in machine vision and manipulation mean robots can perceive and handle a wider variety of objects and tasks. AI-trained models and simulation-driven training allow for more adaptive behavior – e.g. a bot can “learn” how to pick up unfamiliar items or fold clothes without explicit reprogramming. At CES, LG’s service robot CLOiD gingerly retrieved milk cartons and folded shirts, while others like Samsung’s chef bot chopped and cooked. These aren’t just pre-programmed tricks; they reflect genuine strides in computer vision and robot hand-eye coordination.

  • Safety & Human Collaboration: Crucially, 2026’s robots are designed to work with and around people. Improvements like onboard lidar, 360° cameras, and AI safety algorithms let robots detect nearby humans and operate without cages or fenced-off zones. One industrial humanoid, for example, features “fenceless” guarding – it will automatically slow or stop if a person comes too close. This makes it feasible to deploy robots on crowded factory floors or construction sites. New safety standards and certifications debuted at CES for AI-driven machines, indicating confidence that they can operate in human environments without incident.
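The “fenceless” behavior described above is essentially speed-and-separation monitoring: the robot throttles itself based on how close the nearest person is. Here is a minimal, hypothetical sketch of that logic – the thresholds and function names are invented for illustration, not taken from any vendor’s system.

```python
# Hypothetical sketch of speed-and-separation monitoring ("fenceless" guarding):
# the robot scales its speed down as a detected person gets closer, and stops
# entirely inside a protective radius. All thresholds are illustrative.

STOP_RADIUS_M = 0.5      # inside this distance: full stop
SLOW_RADIUS_M = 2.0      # inside this distance: scale speed down linearly
MAX_SPEED_MPS = 1.2      # nominal operating speed

def allowed_speed(nearest_person_m: float) -> float:
    """Return the permitted speed given the nearest detected human's distance."""
    if nearest_person_m <= STOP_RADIUS_M:
        return 0.0
    if nearest_person_m >= SLOW_RADIUS_M:
        return MAX_SPEED_MPS
    # Linear ramp between the stop and slow radii
    frac = (nearest_person_m - STOP_RADIUS_M) / (SLOW_RADIUS_M - STOP_RADIUS_M)
    return MAX_SPEED_MPS * frac

print(allowed_speed(0.3))   # 0.0 -> person too close, stop
print(allowed_speed(3.0))   # 1.2 -> area clear, full speed
```

Real systems layer lidar, camera fusion, and certified safety controllers on top of this idea, but the core contract – slow as people approach, stop before contact is possible – is what makes cage-free deployment viable.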

All these advances translated into real deployments being announced at CES. Instead of concept videos, companies talked delivery dates and pilot programs. Caterpillar, for instance, demonstrated an autonomous excavator powered by NVIDIA’s AI platform – a live construction vehicle doing real work under AI guidance. Startups showed off warehouse bots already sorting packages in beta client facilities. And in perhaps the biggest “about time” moment, Boston Dynamics unveiled a production-ready Atlas humanoid and announced it will be working on Hyundai’s automobile assembly line in Savannah by 2028 – moving from YouTube legend to actual employee. The takeaway: CES 2026 will be remembered as the year physical AI graduated from cool demo to credible deployment.

The Rise of AI-Native Hardware & Robotics

Another striking trend was the emergence of AI-native hardware – robots and devices conceived as physical avatars of AI from the start. These aren’t traditional machines with a bit of AI bolted on; they are built like “AI endpoints” in an intelligent network. At CES, this took many forms. We saw agile humanoid robots (from startups and tech giants alike) designed to operate in human environments, as well as smart mobile robots and drones for factories and farms. The common theme: modular, software-defined designs that can be updated as easily as a smartphone. For example, several vendors highlighted how their robots’ capabilities could improve via over-the-air model updates or cloud simulation training, without changing the physical hardware. This software-first approach to robotics makes them scalable and adaptable – a robot can be repurposed from, say, pallet hauling to floor cleaning by swapping its AI skill module rather than its mechanics.
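To make the “swap the AI skill module, not the mechanics” idea concrete, here is a toy sketch of a software-defined robot: the hardware-facing interface stays fixed while behavior comes from pluggable skill objects that can be replaced at runtime. Every class and method name here is invented for illustration.

```python
# Illustrative sketch of a software-defined robot: behavior lives in swappable
# "skill" plugins behind a stable interface, so a robot can be repurposed
# (e.g., pallet hauling -> floor cleaning) without touching the hardware.
from typing import Protocol

class Skill(Protocol):
    name: str
    def step(self, observation: dict) -> dict: ...

class PalletHauling:
    name = "pallet_hauling"
    def step(self, observation: dict) -> dict:
        return {"action": "lift", "target": observation.get("pallet_id")}

class FloorCleaning:
    name = "floor_cleaning"
    def step(self, observation: dict) -> dict:
        return {"action": "sweep", "zone": observation.get("zone")}

class Robot:
    def __init__(self, skill: Skill):
        self.skill = skill
    def load_skill(self, skill: Skill) -> None:
        # In practice this is the over-the-air update point
        self.skill = skill
    def tick(self, observation: dict) -> dict:
        return self.skill.step(observation)

bot = Robot(PalletHauling())
print(bot.tick({"pallet_id": 7}))    # {'action': 'lift', 'target': 7}
bot.load_skill(FloorCleaning())      # repurposed without a hardware change
print(bot.tick({"zone": "aisle-3"})) # {'action': 'sweep', 'zone': 'aisle-3'}
```

The design choice mirrors what the vendors at CES were pitching: keep the mechanical and sensing layer stable, and let capability ship as software.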

Key themes in AI-native robotics:

  • Humanoids & Mobile Manipulators: 2026’s show floor was packed with human-like robots (bipeds with arms) and wheeled robots that manipulate objects. Their appeal is versatility – they can navigate spaces built for people and use human tools. Companies like Apptronik and Figure AI (both U.S. startups) quietly showed progress on humanoids meant for industrial work, while Korea’s LG demoed CLOiD as a domestic helper. The fact that nine different humanoid models made headlines at CES suggests this category has arrived. These robots are increasingly capable of common tasks (opening doors, stocking shelves), thanks to better joints, grippers, and AI brains. Humanoids are no longer sci-fi; they’re a major frontier for physical AI adoption across factories, hospitals, and even homes.

  • Autonomous Inspection & Material Handling: Not all robots need arms and legs – many of the most successful physical AI systems are specialized mobile platforms. CES spotlighted warehouse bots and delivery robots equipped with advanced AI. For instance, several late-stage startups from the U.S. showcased autonomous forklifts and inventory drones that use AI to navigate and check stock without human drivers. Utility and infrastructure firms are deploying robot dogs and crawlers (on wheels or tracks) to inspect power lines, sewers, and industrial sites autonomously. These “hands-off” robots handle the dirty and dangerous jobs with onboard AI making real-time decisions. The trend is robots as roving sensors and haulers, coordinated by cloud software.

  • Multi-Robot Coordination: As individual robots become smarter, scaling up means managing fleets. A notable buzzword was “robot swarms” or multi-robot systems. We saw demos of warehouse robots that communicate with each other to efficiently divvy up tasks. Some vendors offer “robot management” cloud platforms – essentially an OS for a fleet of hundreds of robots on a site. One highlight: Boston Dynamics revealed that its new Atlas units will share skills with each other via a cloud platform (a kind of hive mind learning) so that when one robot learns a new task, all units know it. This kind of coordination and knowledge sharing accelerates deployment at scale. The endgame is an AI-enabled workforce of machines that can be directed collectively, much like distributed computing nodes.
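The “divvy up tasks” coordination above can be sketched with a simple greedy auction: each task goes to the idle robot with the lowest estimated cost (here, plain distance). Production fleet managers also weigh battery state, deadlines, and traffic, but this is the core allocation idea; all names and coordinates below are illustrative.

```python
# Minimal sketch of fleet task allocation via a greedy "auction": each task is
# won by the closest still-available robot. Purely illustrative data.
import math

robots = {"r1": (0.0, 0.0), "r2": (10.0, 0.0), "r3": (5.0, 5.0)}
tasks = [("pick_A", (1.0, 1.0)), ("pick_B", (9.0, 1.0)), ("pick_C", (5.0, 6.0))]

def dist(a: tuple, b: tuple) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

assignments = {}
available = dict(robots)
for task_id, loc in tasks:
    # The cheapest (nearest) available robot wins this task
    winner = min(available, key=lambda r: dist(available[r], loc))
    assignments[task_id] = winner
    del available[winner]          # that robot is now busy

print(assignments)   # {'pick_A': 'r1', 'pick_B': 'r2', 'pick_C': 'r3'}
```

A cloud “robot management” platform is, at heart, this loop run continuously over hundreds of robots and a live task queue, plus the shared-skill distribution described above.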

Overall, the rise of AI-native robotics means machines are increasingly flexible, collaborative, and upgradeable. Companies are treating robots less like fixed industrial appliances and more like smartphones or PCs – hardware that thrives on regular software improvements. This paradigm shift, evident throughout CES, promises faster innovation cycles in robotics. It also blurs the line between a “robotics company” and an “AI company,” as success requires deep integration of both. As CES organizers noted in their recap, humanoid robots and other physical AI embodiments are expanding across home, industrial, medical, and mobility applications in a way that improves safety, efficiency, and workforce resilience.

Factories Become Software Systems

A mantra echoed in Vegas this year: “the factory is the new computer.” Manufacturers are increasingly running their operations using the same playbook that tech companies use to run cloud services. That means digital twins, simulation-first design, and closed-loop learning are entering the industrial world in force. In practical terms, factories are being treated like software systems – modeled digitally before they’re built, then continuously monitored and optimized via data and AI once operational.

At CES 2026, multiple announcements drove this point home. Siemens AG’s CEO Roland Busch (in a keynote that notably featured NVIDIA’s Jensen Huang) unveiled a suite of industrial AI tools, including a Digital Twin Composer to simulate production lines at “metaverse” scale. The Siemens-NVIDIA partnership is particularly telling: they are integrating NVIDIA’s AI platforms deeply into Siemens’ factory automation software. The result? Entire manufacturing plants can essentially act as “gigantic robots”, with AI algorithms orchestrating everything from production scheduling to quality control to predictive maintenance. In other words, the factory itself becomes a robot – sensors, machines, and AI software all linked in one intelligent system. Huang described it as building an industrial AI operating system for smart factories. This approach can dramatically speed up iteration (you can virtually test a line change before touching any physical equipment) and improve efficiency (AI finds tweaks to boost output or reduce energy use in real time).
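The “virtually test a line change before touching any physical equipment” loop can be illustrated with a toy digital twin: a model predicts throughput under a proposed change, and the change is only pushed to the real line if the prediction improves on the baseline. The throughput model and all numbers here are invented for illustration, not from Siemens or NVIDIA tooling.

```python
# Toy digital-twin loop: simulate a proposed line change, compare against the
# current baseline, and only apply it if the twin predicts a gain.
# The throughput model is deliberately simplistic and purely illustrative.

def predicted_good_units_per_hour(conveyor_speed: float) -> float:
    # Faster conveyors raise raw output but also the defect rate (toy model)
    raw_output = 100.0 * conveyor_speed
    defect_rate = min(0.02 * conveyor_speed ** 2, 1.0)
    return raw_output * (1.0 - defect_rate)

current_speed = 1.0
proposed_speed = 2.0

baseline = predicted_good_units_per_hour(current_speed)
candidate = predicted_good_units_per_hour(proposed_speed)

if candidate > baseline:
    print(f"apply change: {baseline:.0f} -> {candidate:.0f} good units/hr")
else:
    print(f"reject change: predicted drop to {candidate:.0f} good units/hr")
```

Closed-loop learning then feeds real sensor data back into the model so the twin’s predictions keep improving, which is what turns a one-off simulation into a continuously self-optimizing plant.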

We also saw the cloudification of manufacturing in smaller ways. Startups in CES’s Eureka Park showed “factory OS” platforms that let even mid-size manufacturers tap into cloud-based AI for optimizing their workflows. Companies like Bright Machines talked up “software-defined microfactories” – modular assembly lines controlled by a central software brain, which can be reprogrammed for new products as easily as deploying a new app. And big industrial players like Siemens and Bosch emphasized how they use IIoT (Industrial Internet of Things) data and AI analytics to treat their production lines like a data-driven service.

Crucially, this fusion of software and production isn’t just theory – it’s being implemented. One example: PepsiCo’s ops team shared how they use digital twins to simulate facility upgrades in the US before rolling them out globally, reducing downtime and surprises. And on the automotive front, manufacturers are collaborating with AI firms to create “lights-out” manufacturing cells that largely run autonomously. All these developments mean future factories will have a tech stack resembling a cloud data center: with virtualization (of processes), AI-driven orchestration, and perhaps even an app-store-like ecosystem for new industrial AI plugins.

In short, manufacturing is becoming a software domain, and CES 2026 underscored that trend strongly. The phrase “AI factory” now applies both to the outputs (AI creating physical things) and the operations (AI managing how things are created). This convergence is giving early adopters huge leaps in productivity and agility. As one analysis put it, by infusing AI into industrial software used worldwide, those plants essentially become self-optimizing robots at scale. Expect this to be a defining competitive edge in the coming years.

Infrastructure Is the New Bottleneck

Ironically, as AI leaves the lab and enters the real world, the biggest challenges have become physical, not algorithmic. At CES 2026, many insiders noted that scaling up Physical AI is now less about inventing smarter AI and more about building the infrastructure to support it. In other words, the bottleneck has shifted to capital expenditure – things like factories, data centers, power grids, and supply chains. Three areas stood out:

  • Compute & Data Centers: The AI models running robots and autonomous systems require immense computing power, and that means power-hungry chips and advanced data centers. NVIDIA’s new Vera Rubin AI supercomputer platform (announced at CES) exemplifies this – it’s a beast of a system (over a thousand GPUs across 16 racks) designed to train the next generation of physical AI models. But such supercomputers need robust infrastructure: cooling, energy supply, networking. In fact, NVIDIA highlighted that Rubin can be cooled with warm water instead of industrial chillers – an engineering feat to deal with its heat output. The broader point is that deploying AI at scale (whether in cloud servers or on-prem for factories) is straining existing data center infrastructure. We’re already seeing power and cooling become limiting factors in tech hubs. The industry is responding with innovations like new memory technologies (e.g. HBM4 high-bandwidth memory) to remove bottlenecks inside servers, and governments are incentivizing data center expansion. The AI cloud behind physical AI has to grow, and fast.

  • Chip Manufacturing & Supply Chain: Physical AI can’t expand without the silicon. The global chip shortage of the past couple years made it painfully clear that our semiconductor manufacturing capacity was lagging. CES discussions frequently touched on this dependency. The good news is massive investments are underway: as of mid-2025, over $500 billion had been committed by the private sector to boost chipmaking, with plans to triple U.S. domestic production capacity by 2032. The CHIPS Act in the U.S. and similar initiatives in Europe and Asia are spurring a boom in new fabs (from Intel in Arizona to TSMC in Taiwan and Japan). This is essential infrastructure for AI – more chips (GPUs, NPUs, sensors, etc.) are needed to put intelligence into every robot, car, and gadget. But fabs take years to build and ramp. Meanwhile, big players like TSMC, Samsung, and Intel are also working on advanced packaging and cooling solutions to make more efficient AI chips. The CES spotlight on next-gen processors from Intel, AMD, Qualcomm and others all ties back to this: hardware matters. Without cutting-edge and abundant chips, the Physical AI revolution could stall. So, a significant portion of AI industry capital in the coming years is going into ensuring the supply of silicon and computing infrastructure can meet the demand of all these smart robots and devices.

  • Robotics & EV Production Facilities: Perhaps the most visible infrastructure investment highlighted was in robot manufacturing itself. In a striking announcement, Hyundai (which owns Boston Dynamics) revealed it is investing $26 billion in U.S. operations to build out an entire supply chain for humanoid robots. This includes a new factory slated to mass-produce 30,000 Atlas robots per year by 2028 – effectively an automobile-scale assembly plant, but for robots. It also includes a dedicated R&D center called the Robotics Metaplant Application Center, which Hyundai described as a “data factory” to train these robots in new skills. In other words, on top of building physical robots, they’re building infrastructure to continuously improve robots (feeding data and AI models in a closed loop). This kind of heavy investment underscores that making advanced robots at scale isn’t like making smartphones – it requires big factories, specialized components (actuators, sensors), and strong logistics for things like batteries and materials. We’re seeing similar moves in the electric vehicle (EV) space too (which overlaps with physical AI through autonomous driving): Tesla, GM, and others have poured billions into “gigafactories” for batteries and EV production, essentially retooling the infrastructure of mobility. The convergence of AI factories, data centers, and energy was a recurring theme. For instance, powering thousands of robots or fast-charging fleets of autonomous EVs will demand upgraded electrical infrastructure and grid capacity. Some CES panels even discussed sustainable energy solutions (like on-site solar + battery for factories) as part of the puzzle. The bottom line is, scaling physical AI is as much an infrastructure challenge as it is a tech challenge.

Going forward, we can expect huge capital projects to continue in these areas. The tech industry is partnering with manufacturing and energy sectors like never before. One could say AI is the new driver of capital expenditure – much like railroads or the internet boom created waves of infrastructure building in the past. CES 2026 made it clear that solving these bottlenecks (in chips, compute, and manufacturing capacity) is critical to fully realizing the Physical AI vision.

Capital Is Repricing the Physical World

Money talks, and in 2026 it’s talking about robots. Investors are rapidly shifting focus from app startups and pure software plays toward opportunities in AI-powered hardware, robotics, and the picks-and-shovels of the new AI age. At CES, this trend was evident in both formal sessions and hallway conversations among VCs, CEOs, and analysts. The phrase “repricing the physical world” captures how assets like factories, warehouses, and even infrastructure companies are being seen in a new light once you add AI into the equation. Three signs of this trend:

  • Venture Capital Flooding into Robotics and Industrial AI: Funding data from the past year or so shows an explosion of investment in “physical AI” startups. In just the first quarter of 2025, global robotics companies raised over $2.26 billion – an astonishing figure for a sector once considered niche. And it’s not just quantity, it’s quality: the rounds are large and led by top-tier investors. A marquee example is Figure AI, a U.S.-based humanoid robotics startup that secured a $675 million round, drawing support from the likes of Sam Altman (of OpenAI), Jeff Bezos, and NVIDIA. Likewise, other players like 1X Technologies (humanoids from Norway, backed by OpenAI’s fund) and Neura Robotics (Germany) have landed nine-figure investments. Such backing would have been unheard of for robotics 5 years ago. Now, investors fear missing out on the next trillion-dollar tech wave, and many believe autonomous robots could be as revolutionary (and profitable) as the personal computer or smartphone. Startups that blend strong AI software with proprietary hardware – especially in clearly monetizable domains like logistics or healthcare – are commanding premium valuations. At CES, several venture panels noted that robotics and hardware IPOs are on the horizon again (with a few aiming for 2026–27 IPO windows). In short, VC and growth capital is finally long on robotics/AI ventures, not just cloud apps.

  • Industry and Cross-Sector Investment: It’s not only venture capital; large corporations and even infrastructure investors are piling in. We’re seeing an uptick in strategic M&A, with industrial conglomerates acquiring AI robotics startups to jump-start their automation efforts. Just before CES, a U.S. automation firm bought a vision-guided robotics startup – a play to own the AI “brains” for its machines. At CES, Hyundai’s $26B commitment (mentioned earlier) is a prime example of a traditional company (auto manufacturing) pouring money into a new physical AI venture (robot production). Similarly, Amazon has been quietly investing in its warehouse robotics (after acquiring Kiva Systems years ago, it continues to fund internal robotics programs and external ventures). We also see government-related capital: the U.S. government and states (through initiatives like the CHIPS Act, manufacturing grants, etc.) are channeling funds into building the backbone for AI industries onshore. All this means that capital flows into physical tech – factories, robotics labs, chip fabs – are reaching levels not seen in decades. Infrastructure funds that once only invested in bridges and power plants are now considering data centers and advanced manufacturing campuses as part of their portfolio (with AI demand making those lucrative). This influx of non-VC capital is repricing physical assets; for example, a cutting-edge automated warehouse or a robot-equipped manufacturing line is valued much higher in the market (or by acquirers) than a traditional one, because of its future earning potential and efficiency.

  • New Investment Mindset: Investing in Physical AI requires a different playbook than pure software, and we heard discussion of this at CES. There’s an understanding that longer time horizons and deeper technical due diligence are needed. Unlike a mobile app that can go viral in months, a robotics company might need several years and significant capital outlay to develop a product and scale production. Investors are adjusting by being more patient and by bringing in cross-domain experts (for instance, VC firms hiring partners with manufacturing or hardware experience). The term “capex-heavy” used to scare away venture investors, but now some are embracing it – provided the upside is a defensible, world-changing technology. One VC on a CES panel quipped that “AI and robotics are the new railroads” – meaning they see parallels to the 19th-century infrastructure boom that required lots of capital but then yielded enormous economic transformation. We’re also seeing collaboration between venture capital and government funding in some cases, to de-risk these big projects (e.g., startups getting DOE or DoD grants to pursue advanced robotics, alongside VC money). The multipliers on success are huge: whoever builds the platform for, say, automating all mid-sized factories or all grocery warehouses could be the next Amazon-scale business. So investors are increasingly willing to bet big. Essentially, the physical world is back as a focus for tech investment, now that AI promises to unlock new value in it. Markets are starting to value companies that have hard assets plus AI (like semiconductor fabs, robotics manufacturers, smart logistics firms) at higher multiples, reflecting that paradigm shift.

For entrepreneurs (“builders”), this is encouraging – it means if you’re working on hard tech to bring AI into the physical realm, there is funding available and an appetite for innovation. For incumbents (“operators”), it signals that staying on the sidelines is risky; not adopting AI and robotics could mean falling behind competitors who do. And for the financial community, Physical AI is shaping up to be the next generational asset class – akin to how the internet or mobile created new titans. As one industry report noted, robotics startups that integrate AI are now securing higher valuations and faster funding cycles, and late-stage robotics firms are eyeing public markets in the next two years. The smart money is repositioning accordingly.

What This Means for Builders, Operators, and Investors

The rise of Physical AI carries different implications for various stakeholders in tech and industry. Here are the key takeaways for those building the tech, those deploying it in their operations, and those funding it:

  • Builders (Technologists & Entrepreneurs): The opportunity in Physical AI is to create full-stack systems that combine software intelligence, hardware, and services. This often means assembling interdisciplinary teams – roboticists, AI engineers, mechanical designers, and domain experts all under one roof. Unlike pure software startups, you’ll need to navigate supply chains and possibly even run manufacturing or deployment operations. The good news: the barriers to doing so are lowering. Open-source models and affordable sensors/robots (some stemming from CES releases) give startups a head start. Builders should aim to own the integration of AI + hardware + data. If you can deliver a working physical AI solution (for example, an AI-driven farming robot or an autonomous warehouse system), customers are increasingly open to pilot and adopt – they’ve seen at CES that this tech is maturing. Also, be prepared for a marathon, not a sprint. Product development cycles are longer and scaling is capital-intensive, but the flipside is once you have a proven solution, it’s a hard-to-replicate moat. Focus on real-world validation (deploy early with pilot customers) to stand out from the crowd of concepts. With major companies as well as VCs now actively scouting in this space, there’s a window of opportunity to establish yourself as a leader in an emerging Physical AI niche.

  • Operators (Industries & Businesses): For companies in sectors like manufacturing, logistics, healthcare, retail, construction – the message of CES 2026 is that AI and robotics can tangibly boost productivity and resilience, today. It’s time to start planning and experimenting with these technologies in your operations. Physical AI offers solutions to persistent challenges: labor shortages, safety risks, quality control, and throughput limitations. For example, manufacturers face a well-documented skills gap and worker shortfall – by one estimate, U.S. industries may need 3.8 million new workers by 2033, with as many as 1.9 million positions projected to go unfilled due to skill gaps. Robots and AI can help bridge this gap by taking on repetitive or strenuous tasks and augmenting the existing workforce. Operators should view Physical AI not as replacing people wholesale, but as automation of the tedious and augmentation of human workers to do higher-level jobs. Early adopters in warehousing and e-commerce (think Amazon, Walmart’s logistics arm with its highly automated distribution centers) are already reaping efficiency gains. Similarly, hospitals using AI-guided robots for routine deliveries or cleaning are freeing up staff to focus on patient care. The key for operators is to pilot early and train your organization – introduce cobots (collaborative robots) on the line, implement AI quality inspection on one product line, or use an autonomous rover for night security as a start. This both builds internal expertise and signals to your workforce that automation is aimed at empowering them, not simply cutting jobs. Change management and upskilling programs will be crucial; successful adopters invest in teaching employees to work alongside AI (e.g., robot maintenance, AI system oversight roles). In summary, businesses should prepare for a future where adaptive automation is part of their core operations – those who do will likely outperform in efficiency, flexibility, and safety. The CES mantra for operators was clear: start integrating physical AI now or risk falling behind.

  • Investors (VCs, Corporate Financiers, and Asset Managers): Physical AI represents a new frontier for investment that blends tech with heavy industry. For venture and growth investors, it means recalibrating expectations and expertise. Diligence needs to cover not just code and market TAM, but also supply chain, manufacturing plans, and regulatory hurdles. The payoff, however, is the chance to back the next generation of foundational companies – akin to investing in semiconductor giants in the 1970s or internet giants in the 1990s. Investors should be ready for longer horizons; many robotics startups won’t follow the 18-month turnaround model of SaaS. Instead, think in 5-10 year windows with staged milestones (prototype, pilot deployment, scaled production, etc.). It’s also wise to leverage partnerships – syndicates or co-investments with corporate venture arms, government programs, or even infrastructure funds can bring the needed capital and expertise. Corporate investors in particular can benefit by strategically funding startups that complement their future needs (as seen with automobile companies investing in autonomous tech firms, etc.). Moreover, investors might consider the infrastructure around physical AI as a class of its own – data centers (perhaps specialized for robotics or AV workloads), sensor manufacturers, actuator suppliers, and so on are all critical links in the chain and potential investment targets. An understanding of cross-sector dynamics (for example, how a breakthrough in battery tech could unlock mobile robot adoption, or how government policy on manufacturing can boost certain business models) will set successful investors apart. Finally, the exit landscape is shifting: we may see more IPOs and consolidations in this space as it matures. Several high-profile SPAC and IPO attempts for automation firms in recent years (some rocky, some successful like warehouse automation firm Symbotic) indicate public markets are cautiously interested. By treating Physical AI companies as long-term value creators – the way one might value a promising biotech company – investors can ride out early volatility. In essence, the smart money is viewing robots, autonomous systems, and AI-integrated infrastructure as core assets of the future economy. The valuations and capital flows are beginning to reflect that, and those who allocate intelligently now could own significant stakes in the “AI-powered industrial revolution” that’s unfolding.

Across the board, CES 2026’s message was optimistic: we are entering a new era where AI is embedded in the physical world around us. For builders, operators, and investors alike, the challenge is to adapt and engage with this shift. Those who do stand to drive extraordinary innovation – and capture the value – in the coming decade. As one CES panel succinctly put it: the last 20 years were about moving fast and breaking things in software, the next 20 will be about moving steel and bending metal with intelligence. The Physical AI era is here, and it’s time to build.

Sources

  1. TechCrunch – CES 2026 Recap (Morgan Little). “Physical AI was particularly prominent… robots demonstrated all over the show.”

  2. Interesting Engineering – 9 Humanoid Robots at CES 2026 (Kaif Shaikh). “CES 2026 marked a clear break… humanoid robots didn’t just pose… They actually worked.”

  3. The Neuron – CES 2026: The Dawn of Physical AI (Noah Edelman et al.). Coverage of NVIDIA & Google partnership, Hyundai’s Atlas factory investment, and industrial AI integrations. (Includes Jensen Huang quote about “ChatGPT moment for physical AI”.)

  4. Marion Street Capital – Robotics Investment Boom (2025) (Sean Heberling). Industry report on funding trends: “Over $2.26B in Q1 2025… Figure AI $675M raised… backed by Altman, Bezos, NVIDIA.”

  5. CES 2026 Official Press Release – “The Future is Here”. Section on Robotics as “physical AI,” outlining humanoid robots emerging as a frontier and AI-driven simulation training for robots.

  6. Manufacturing Dive – 5 Trends for 2026 (S. Zielinski). Noted manufacturing investments: “$500B+ committed to chipmaking… triple capacity by 2032”; workforce stats: “3.8M manufacturing jobs needed, 1.9M may go unfilled by 2033.”