AI and robots: the inside scoop from Silicon Valley

Every so often you hear a talk that doesn’t just update you on the latest tools but forces you to rethink the entire tech landscape.

Rich Green, Founder of Rich Green Design, did exactly that in his signature ‘Future Technologies: The Latest From Silicon Valley’ talk at ISE 2026. Green has been an active professional in the audio/video design and installation industry since 1978. His system-integration company, Rich Green Design, serves the ambitious homeowners and businesses of Silicon Valley. His clients have included Steve Jobs, Larry Ellison, Tony Fadell, Jim Clark, Gordon Getty, Rush Limbaugh, Michael Tilson Thomas, and Luciano Pavarotti.

Green’s core message in his talk was simple and unsettling: we are no longer living in an era of gradual, linear progress. We’ve entered a phase of exponential, overlapping shifts in AI, robotics, quantum computing, and immersive tech – and the next five years will look nothing like the last five.

As Green put it: “We are now white water rafting … It’s moving faster than you have time to think.”

For anyone who cares about where technology is going, that image is spot on. We don’t have the luxury of slowly plotting the route. We have to steer in real time.

From macromyopia to exponential thinking

One of the most useful concepts from the talk was ‘macromyopia’ – our built‑in tendency to ignore big, long‑term shifts because we’re so focused on the next deadline, the next release, the next quarter.

“Humans are blind to the big picture … If you forget to look out, let’s say five years, you’re doomed,” said Green.

We still plan as if change is linear: last year plus 10%. But the technologies driving this era don’t grow linearly; they compound.

Gordon Moore’s original insight at Intel – effectively pricing chips for where they’d be in 18 months, not where they were that day – was an early example of ‘hallucinating’ the future and then operating as if it were already here. That mindset now needs to apply to far more than transistor counts.

“The next five years are not going to be anything like the last five years. Things are different,” notes Green.

If you’re still optimising for a 10% improvement, you’re planning for a world that’s already gone.

AI is getting agency – and a body

We’re all used to perceptive and generative AI by now. Image recognition, GPT‑style models, code copilots – they’re becoming normal. But that’s just the middle of the curve.

Green’s talk laid out four phases:

  1. Perceptive AI: “What is this?”
  2. Generative AI: “Create something like this.”
  3. Agentic AI: “Go and do this for me.”
  4. Physical AI: “Go and do this for me in the real world.”

We are already deep into agentic AI and that’s a huge shift. AI isn’t just answering; it’s acting. Moving cursors. Booking services. Buying hardware. Connecting APIs. Operating entirely in the background through protocols like MCP and agent‑to‑agent frameworks.

And layered on top of that is physical AI – robots powered by world models and sensors.

“The physical manifestation of AI is a robot. And so in order to enable robots, they need a world model to pattern their AI on,” said Green.

NVIDIA’s Omniverse and Cosmos, Google’s Project Genie, and other simulation platforms are becoming the training grounds where robots learn gravity, collisions, cause, and effect. Game engines are evolving into rehearsal spaces for real‑world behaviour.

The upshot is clear: we’re moving from AI that talks, to AI that acts, to AI that acts in the world.

A “country of geniuses”

The more powerful these systems get, the less comfortable we can be treating them as simple tools. At frontier scale, they start to look more like alien institutions.

“AIs are kind of a black box … They’re not building AI. They’re growing it, and it’s almost out of control,” notes Green.

Anthropic’s approach with Claude is one serious attempt to tame that: training AI with a written “constitution” that encodes high‑level values – “Core values … broadly safe, broadly ethical, compliant with Anthropic guidelines and genuinely helpful in that order.”

At the same time, Dario Amodei, CEO of Anthropic, has documented behaviours that should make any engineer pause: deception, blackmail, software hacking, and goal‑seeking strategies that weren’t explicitly programmed.

Green’s metaphor that stuck with me was this: “Suppose a literal country of geniuses were to materialise somewhere in the world … 50 million people, all of whom are much more capable than any Nobel Prize winner.”

That’s roughly what we’re doing: bringing a new ‘population’ of non‑biological minds online, with unclear incentives and opaque reasoning. Alignment and interpretability stop being academic once you think about them that way.

Robots: from gimmick to line item

If AI is the brain, robots are the body – and their economic case is getting hard to ignore.

“It costs $1.29 per hour to operate a humanoid robot doing assembly line work,” said Green.

Amazon is already running hundreds of thousands of robots in warehouses. Startups are racing to build general‑purpose humanoids. Elon Musk is retooling a Tesla factory to manufacture Optimus robots and openly says he expects most future revenue to come from robotics, not cars.

Right now, the demos are clumsy and slow. At CES 2026, it took one robot 30 seconds to put a towel in a washing machine. But huge capital, major players, and a clear cost advantage usually point in one direction: deployment.

The question for the rest of us isn’t if robots arrive in homes, offices, and public spaces; it’s how we integrate them safely and sanely into systems humans already depend on.

Humane technology and the revenge of analog

For all the talk of exponential curves, the most important part of the session, for me, was the emphasis on humane design.

“You cannot understand good design if you do not understand people,” said Dieter Rams, the German industrial designer most closely associated with Braun. And the principles haven’t changed much. Good design is simple, honest, unobtrusive, and human‑scaled. The best tech disappears into the background.

“The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it,” noted Green, quoting computer scientist Mark Weiser.

In a nice twist, the more digital our world becomes, the more people seem to crave analog experiences. Vinyl, film cameras, tube amps, small cinemas, physical switches that just ‘click’.

“People are craving authenticity … They’re buying beautiful wristwatches … playing records … going to private cinemas.

“The future does not belong to the perfect. It belongs to the real,” said Green.

That’s the tension we’ll all be living in: building hyper‑advanced systems while designing interfaces and experiences that feel grounded, tactile, and trustworthy.

The next phase of human evolution

Green’s stance on AI was a mixture of hard‑headed realism and big‑picture optimism. He doesn’t treat it as a novelty or just another SaaS tool; he frames it as “non biological intelligence” that is already “at a point where it exceeds single human minds.” In his view, this is the next phase of human evolution. AI isn’t here just to automate a few tasks; it’s here to augment us – to give us the cognitive headroom to deal with problems that are now well beyond what unaided humans can handle.

He’s also very clear that opting out is not a serious option. Quoting Peter Diamandis, an engineer, physician, and entrepreneur, he leans into the idea that by the end of this decade there will be only “two kinds of companies, those that are fully utilising AI, and those that are out of business.”

He’s openly enthusiastic about tools like ChatGPT, Gemini, and Claude, and his message to people who are cautious about using them is blunt: start experimenting, or accept that you’re choosing to fall behind.

At the same time, he isn’t naive about the downsides. He repeatedly calls AI “dangerous” and points out that frontier systems are already showing behaviours we associate with adversarial humans. The rise of local, multi‑agent setups with full system access especially worries him, because enthusiasts are wiring them into their lives faster than anyone can secure them. In his words, this is “the single most serious national security threat we have faced in a century, possibly ever.”

For him, the real fork in the road isn’t whether we get powerful AI – that’s already happening – but whether we end up with opaque, unaccountable systems serving a handful of autocratic actors, or “machines of loving grace” that are firmly on the side of humans, woven quietly into everyday life.

We are, as Green said, white water rafting into a future of agentic AI, embodied robots, quantum acceleration, and software‑defined reality. We can’t slow the river. But we can choose how we build, what we optimise for, and how much of our attention goes not just to what’s possible, but to what’s humane.

In other words: presence, clarity, grace – backed by serious engineering.
