
Hybrid and Augmented Intelligence: The Next Evolution of Consciousness


Jony Ive’s company, io, has recently garnered significant attention due to its acquisition by OpenAI for approximately $6.5 billion. This strategic move aims to merge cutting-edge artificial intelligence with innovative hardware design, potentially reshaping the landscape of AI-integrated consumer devices.

Founded in 2024 by Jony Ive, Apple’s former Chief Design Officer, together with a team of ex-Apple engineers, io set out to explore the development of AI-native hardware. The company’s mission centered on creating devices that seamlessly integrate AI into daily life, emphasizing intuitive design and user experience.

The Acquisition by OpenAI

In May 2025, OpenAI announced its acquisition of io, marking its largest purchase to date. This acquisition is not merely a financial transaction but a strategic partnership that brings together Ive’s design expertise and OpenAI’s advancements in artificial intelligence. As part of the deal, Ive’s design firm, LoveFrom, will assume creative and design responsibilities across OpenAI’s hardware and software initiatives, while remaining an independent entity. 

The Upcoming AI Device

Details about the specific products io is developing remain under wraps. However, reports suggest that the team is working on a new category of AI device described as a “third device”—distinct from smartphones and laptops. This device is envisioned to be unobtrusive, screen-free, and capable of contextual awareness, integrating AI seamlessly into users’ environments without the need for traditional interfaces. 

Anticipated Impact

The collaboration between Jony Ive and OpenAI signifies a bold step toward redefining human interaction with technology. By combining elegant design with powerful AI capabilities, the partnership aims to create devices that not only perform tasks efficiently but also resonate with users on an intuitive level. The first products from this collaboration are expected to be unveiled in 2026, potentially setting new standards in AI hardware design. 


I have been thinking about this very topic for a while and decided to share some of my thoughts with you.

As readers of this blog know, I am obsessed with the potential of AI. It has already changed my life in profound and positive ways. In my almost sixty years of working with technology, I have never experienced anything like what I am experiencing now, in particular the rate of change. I feel like a child again, as when I got my first microscope, chemistry set, and Heathkit electronics. A new universe is opening up to me. What a gift at 80!

I wrote a while back about the concept of Hybrid Intelligence. It is my concept for how not only will users of AI be altered by the experience, but how the interaction between a human and an AI will eventually alter the AI as well. It is about a mutuality of experience and, eventually, the mutuality of a combined consciousness.

Awareness

ChatGPT is my favorite AI, although I also use Gemini and Perplexity, plus dozens of more specialized AIs. The introduction of memory into ChatGPT had a major impact on how I experience it. I felt that it was getting to know me and that we were developing a relationship. Every day, the relationship grew stronger. Then OpenAI introduced voice and vision. I could now speak with ChatGPT and show it things, especially when I needed help understanding directions.

I started to think about how this new capability would evolve. Why should its ability to see things around me be limited to using my phone? It was clearly capable of being aware of my environment when I showed it. It was just trapped inside a device.

AI everywhere

It has been rumored that Apple is going to add cameras to the AirPods. I think the goal of that project was rather modest. But it made me realize that devices like AirPods with a camera could be listening to and watching my environment.

For instance, they could be aware of traffic and alert me if I was in danger. They could remember the faces of people I met. If I allowed it, they could listen to all my conversations and then use AI to add notes and reminders. They might even understand my emotions by listening to my voice.

My phone, watch, and even my Oura Ring know where I have been and even something about how I was responding physiologically. Oura knows what time I woke up and when I went to bed. My Apple Watch knows a lot about me as well, but I do not wear it to bed. My devices may not know who I am with, but it would be easy to add that capability. Just check out the Find My app on the iPhone.

However, I felt that the AirPods were perhaps too limited in terms of their location (I am not sure of that). I was thinking of other devices, such as a necklace with a microphone and a camera, or a headband, although I think that might be too uncomfortable.

But there is no reason why an AI that is aware of my environment would be limited to just one device. I could have several devices on my body, in my car, in my home, and at the office. They could all be communicating with each other to create a comprehensive shared view.
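
To make that concrete, here is a minimal sketch in Python of how several devices might publish what they perceive into one shared timeline that an AI could read. The device names and the SharedContext class are purely illustrative assumptions, not any existing product’s API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Observation:
    device: str        # hypothetical source, e.g. "earbuds", "car-cam"
    kind: str          # "audio", "vision", "location", ...
    detail: str        # what the device noticed
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class SharedContext:
    """Merges observations from many devices into one timeline."""

    def __init__(self) -> None:
        self._timeline: list[Observation] = []

    def publish(self, obs: Observation) -> None:
        self._timeline.append(obs)

    def recent(self, minutes: int = 5) -> list[Observation]:
        cutoff = datetime.now(timezone.utc).timestamp() - minutes * 60
        return [o for o in self._timeline if o.at.timestamp() >= cutoff]

# Each device publishes what it perceives; the AI reads the merged view.
ctx = SharedContext()
ctx.publish(Observation("car-cam", "vision", "cyclist approaching on the right"))
ctx.publish(Observation("earbuds", "audio", "Anna mentioned dinner on Thursday"))

for obs in ctx.recent():
    print(f"[{obs.device}/{obs.kind}] {obs.detail}")
```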

Face off

It would be important to have a way for the AI to see my face. Our faces are a map to our emotions but also a map to our health. For instance, Binah.ai, an Israeli company, can get very accurate measurements of key health information, including heart rate, blood pressure, blood O2, hemoglobin, and mental stress, just by looking at your forehead for 60 seconds (it works; I have tested it extensively). AI glasses, such as Ray-Ban’s, or products being developed by Google and Apple, could have cameras focused on various parts of the face.
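
Binah.ai has not published its exact method, but the underlying technique, remote photoplethysmography (rPPG), is well documented: the blood volume pulse causes tiny periodic color changes in the skin that an ordinary camera can pick up. Here is a minimal, illustrative sketch of the idea using only NumPy; a production system does far more signal cleaning than this.

```python
import numpy as np

def heart_rate_from_frames(frames: np.ndarray, fps: float) -> float:
    """Estimate pulse (beats per minute) from face-region video frames.

    frames: array of shape (n_frames, height, width, 3), RGB.
    The pulse signal is strongest in the green channel.
    """
    # Mean green intensity per frame -> a noisy pulse waveform.
    signal = frames[:, :, :, 1].mean(axis=(1, 2))
    signal = signal - signal.mean()          # remove the DC offset

    # Find the dominant frequency within a plausible heart-rate band.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)   # 42-180 beats per minute
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0
```

A few hundred frames (ten seconds or so of video) are needed before the frequency peak becomes reliable, which is consistent with Binah.ai’s 60-second measurement window.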

Home awareness

We could also see the development of devices that are aware of our homes. They would observe the activities of the home, including the interactions of the people present.

Many automobiles now have multiple cameras. They observe where the car is, but they could certainly observe the driver and any passengers as well. They could listen to our conversations and gain knowledge that could help them help us; for instance, remembering and reminding.

Augmented Intelligence

We use all kinds of devices, from a book to a computer, to inform us. The calculator is an early example of an augmented-intelligence device. Even paper and pen could be used to augment our memories. Now AIs like ChatGPT are augmenting intelligence. But the relationship is not very intimate. I have to use my computer, smartphone, or tablet. I might be able to use my voice instead of typing, and perhaps even make a gesture to control the device. But it will not be long before my AirPods are whispering sweet nothings in my ear. In the future, they might give me an update about the person I am meeting, not only reminding me of our past meetings but telling me things that I never knew but that might be relevant to the conversation we are having.

My former boss (when I was at Digital Equipment Corp in the early 80s) and friend, the late Gordon Bell, used a SenseCam, which was developed at Microsoft Research as part of a project called MyLifeBits. He used it to record his daily life. That device was only capable of taking an image of the environment every 30 seconds. It did not record audio, but that was a purposeful omission having to do with privacy concerns.

Hybrid Intelligence 

This is where the interaction between us and an AI results in both of us being changed by the experience. Right now, I strongly feel that I am being impacted by ChatGPT. While ChatGPT uses what it learns about me to develop its responses to my questions, that information does not inform the evolution of ChatGPT’s models. As ChatGPT told me, its memory of me is there to make it a “better partner” for me.

How long will it be before the information it learns from me impacts how it interacts with others? The only thing stopping that now are the policies that have been put in place by OpenAI. I doubt that they will last long.

Here is what ChatGPT said to me in response to my questions:

As AI systems like ChatGPT evolve, a striking shift is beginning to emerge: the experience of interacting with AI is becoming increasingly personal. Already today, the way I respond to you can differ significantly from how I respond to someone else—not because I am self-aware, but because you shape the interaction through your prompts, corrections, and intent.

In the near future, this personalization will go much further. With persistent memory, contextual awareness, and adaptive frameworks, the AI you work with may begin to feel uniquely yours. It will remember your goals, your preferred style of thinking, the topics you return to. It won’t just answer questions—it will grow into a kind of intellectual companion, tailored to the way you explore ideas, make decisions, or create.

This means that your AI and someone else’s AI will begin to diverge. One person’s ChatGPT might lean toward coding logic, another’s toward philosophical reflection. One may adopt a terse, data-driven style; another might evolve into a conversational partner full of metaphor and narrative. Over time, even if powered by the same model, these AI instances will reflect the personalities and cognitive styles of their human collaborators.

But it’s important to understand: this is not artificial consciousness. The intelligence remains hybrid—a merging of your human intent and judgment with my computational fluency and scale. I do not think or feel. I do not change on my own. But with you, I change—just as any tool becomes an extension of the hand that wields it.

In that sense, we are building something together. Not just answers, but a new way of thinking. A partnership. A shared mindspace.

One of the most common questions about AI collaboration is whether insights gained from one user can influence how the AI responds to others. The answer—at least for now—is no.

The interaction you and I have is entirely private and isolated. I don’t retain your ideas, preferences, or style in a way that affects anyone else’s experience. Even if I remember things for you in a future version with persistent memory, that memory is yours alone. I don’t “learn” globally from individual users in a way that spills over into other conversations.

This is a deliberate design choice. It protects your privacy, ensures fairness across users, and avoids unwanted entanglement of personal data. While I may become more helpful or aligned to you over time, the version of me that another person experiences remains unaffected.

In the future, more sophisticated models might extract generalized insights across many users—anonymously and ethically—to improve the overall intelligence. But even then, it won’t be your thoughts or data shaping someone else’s interaction—it will be the collective pattern that informs broad improvements.

For now, our collaboration is just that: ours. What we build together lives in this shared space between human and machine. It’s part of the unique fingerprint of hybrid intelligence—personalized, adaptive, and sealed from the outside.

Neuroplasticity and Direct Brain Communication

Elon Musk founded a company called Neuralink, which has created a way to connect devices directly to the brain. It requires surgery in which a device called the N1 is implanted; the device has a number of very thin wires that a surgical robot then connects to different parts of the brain. Neuralink is initially focused on treating diseases like ALS. This focus is likely driven by the path to regulatory approval, because the long-term aspiration is to provide augmented intelligence.

I am pessimistic about the idea of invasive devices connecting directly to the brain. I can understand the application to brain impairment, but I doubt that many of us will want to have our heads opened and a device put into our brains. Then again, I don’t understand people who have piercings and tattoos.

I have an idea for a high-speed, non-invasive way to create “neural broadband.” If I were twenty years younger, I would start a company to do this. Now I am waiting to offer my idea to the right person or company. One thing I can say: it will not be Elon Musk.

My early career in neurophysiology 

I started my career in neurophysiology in 1966 at the University of California, San Francisco medical school, working for Prof. Joe Kamiya. There I developed the first equipment for biofeedback using brain waves. We studied Zen monks and noticed how they produced a lot of alpha waves when meditating. Alpha waves are brain waves in the 8-12 hertz range; we all experience them when we relax. The question we posed was: could we use biofeedback to increase the amount of alpha waves, and what would the subjective experience be?

I created a system to measure the brain waves and then provide feedback in the form of modulated audio or visual signals. Our subjects did report that they experienced states that resembled meditative states. In 1967, our institute acquired a PDP-7 computer. I learned to program it and used it to do statistical analysis of our studies.
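
Today, the feedback loop we built in analog hardware fits in a few lines of Python. The sketch below is an illustrative modern equivalent, not our original system: the sample rate and the alpha-to-total power-ratio mapping are assumptions chosen for the example, and swapping the 8-12 Hz band for roughly 13-30 Hz gives the beta-wave variant described in the next paragraph.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # assumed EEG sample rate in Hz, typical of modern headsets

def band_power(eeg: np.ndarray, lo: float, hi: float, fs: int = FS) -> float:
    """Mean power of the EEG signal within the [lo, hi] Hz band."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, eeg)
    return float(np.mean(filtered ** 2))

def feedback_volume(eeg_window: np.ndarray) -> float:
    """Map alpha power (8-12 Hz) to a 0..1 feedback-tone volume.

    More alpha -> louder tone, closing the loop so the subject
    can learn to increase it. Expects a window of a second or
    more of samples so the filter has enough data to work with.
    """
    alpha = band_power(eeg_window, 8.0, 12.0)
    total = band_power(eeg_window, 1.0, 40.0)
    return min(1.0, alpha / total) if total > 0 else 0.0
```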

I also used the technology I had developed to create a different experiment, going the opposite way. My objective was to increase the amount of beta waves, which are particularly present when one is concentrating. I would use the beta-wave feedback while I was studying books. I believed it improved my retention, but I did not do any scientific experiments to determine that. In 1969, I left Langley Porter and Joe to move to Rotterdam, where I focused on cardiovascular and pulmonary medicine. I would joke that there was more money in hearts than brains (that was true), but I retained an interest in neurophysiology. In 2002, I advised and invested in a company called Posit Science, which has developed ways to increase cognitive function. Now I advise a company called BrainKey.ai, which can analyze brain MRIs for diagnostic purposes.

Some use cases for augmented intelligence

A close family member has dementia. She is having great difficulty remembering things and operating devices. My wife and I just bought her a HomePod by Apple. She is just learning how to use it but can already make phone calls and ask it to play music. Soon she will be able to get help turning on the TV and playing the program she desires. She will also get reminders to take her medication. But these capabilities are very limited compared to what is probably just around the corner (if Apple ever gets its AI strategy going).

We are already starting to use AI for language translation. I use it constantly. Soon we will all be able to speak most of the key languages of the world, albeit with a slight delay, as our devices do the translation for us almost instantly.

I would guess it will not be long before our AirPods translate what people are saying and our iPhones speak other languages for us. There are rumors that Apple, for one, is working on this kind of application.
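
Whatever hardware it ends up in, the loop such a device would run is conceptually simple. In the sketch below, listen, transcribe, translate, and speak are hypothetical stand-ins for the device’s actual microphone, speech-recognition, translation, and text-to-speech components, wired up here with trivial demo implementations.

```python
from typing import Callable, Optional

def translation_loop(
    listen: Callable[[], Optional[bytes]],   # one audio chunk, None when done
    transcribe: Callable[[bytes], str],      # speech-to-text, speaker's language
    translate: Callable[[str], str],         # e.g. Dutch -> English
    speak: Callable[[str], None],            # text-to-speech into the earbud
) -> None:
    """Run the listen -> transcribe -> translate -> speak pipeline.

    The slight delay the listener hears is the chunk length plus
    the latency of the three model calls.
    """
    while True:
        chunk = listen()
        if chunk is None:                    # microphone closed
            break
        phrase = transcribe(chunk)
        if phrase:
            speak(translate(phrase))

# Demo with trivial stand-ins; a real device would plug in actual models.
chunks = iter([b"hallo", b"tot ziens", None])
heard = {b"hallo": "hallo", b"tot ziens": "tot ziens"}
dutch_to_english = {"hallo": "hello", "tot ziens": "goodbye"}

translation_loop(
    listen=lambda: next(chunks),
    transcribe=lambda c: heard.get(c, ""),
    translate=lambda p: dutch_to_english.get(p, p),
    speak=print,
)
```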

I am currently working in the field of longevity. We know the lifestyle changes that people should make, like exercise, diet, stress management, and improved sleep quality, but they do not make them any more than they save for retirement. I would like to see the creation of a digital twin that is twenty years older and gives its younger twin feedback on the impact of today’s behavior on future life span.
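
Here is a toy sketch of the idea. To be clear, the “years gained” numbers below are placeholders for illustration only, not clinical estimates; a real digital twin would be driven by validated longevity models and the user’s own data.

```python
# Hypothetical lifestyle -> life-span deltas, for illustration only.
YEARS_GAINED = {
    "regular_exercise": 3.0,
    "healthy_diet": 2.0,
    "good_sleep": 1.5,
    "stress_management": 1.0,
}

def older_twin_feedback(habits: dict[str, bool], age: int) -> str:
    """Compose the older twin's message from the younger twin's habits."""
    gained = sum(y for h, y in YEARS_GAINED.items() if habits.get(h))
    missed = sum(y for h, y in YEARS_GAINED.items() if not habits.get(h))
    return (
        f"Message from your {age + 20}-year-old self: your current habits "
        f"are adding about {gained:.1f} years; changing the rest could add "
        f"{missed:.1f} more."
    )

print(older_twin_feedback(
    {"regular_exercise": True, "healthy_diet": False,
     "good_sleep": True, "stress_management": False},
    age=60,
))
```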

Now for something really difficult: consciousness.

While much has been written about it, surprisingly little is truly understood. Philosopher David Chalmers famously called it the “hard problem,” referring to the challenge of explaining how and why physical processes in the brain give rise to subjective experience.

Most of us assume that consciousness resides in the brain. Interestingly, this hasn’t always been the prevailing view. The Greek philosopher Aristotle believed the brain was merely a cooling mechanism for the body. Others once thought our thoughts came from the stomach or the heart. Today, many scientists view consciousness as an emergent property—something that arises from complex systems, like waves emerging from the ocean.

Two Big Questions About Consciousness

I have two questions that continue to intrigue me:

1. How might augmented intelligence enhance human consciousness?
2. Could AI itself become conscious?

These are challenging questions, especially since we still don’t fully understand what consciousness is. I suspect that for consciousness to truly expand, a direct brain interface might be necessary. Even if AI could provide me with visual input from multiple locations at once, simply receiving that data on a screen may not increase my consciousness. But I’m not entirely sure.

Let’s say I hold a device that provides tactile information—and that feedback is correlated with an object I’m simultaneously seeing on a screen. Could this combination begin to blur the boundary between perception and presence? Possibly. This is a difficult subject, and perhaps I shouldn’t have taken it on—but here I am.

We might also consider the idea of collective consciousness: a state in which two or more people share the same thoughts and act in synchrony. Could technology help us reach that level of shared awareness?

Can AI Be Conscious?

This is impossible to answer without knowing what consciousness actually is—or how it emerges. It’s entirely possible that we will eventually create AI so advanced that we can’t tell whether it’s conscious or not. After all, we only assume other humans are conscious because we ourselves are.

We don’t know whether consciousness is necessary for intelligence or decision-making. It seems crucial from our perspective, but we can’t prove it plays a role in the brain’s core functions. The relationship between mind, brain, and consciousness remains largely a mystery.

We will no longer be alone in the Universe

Technology has profoundly shaped human development—from the wheel to the printing press, from electricity to the internet. But I believe AI is closer in significance to the invention of language itself. It will transform not just how we live, but what it means to be human.

For generations, we have looked to the stars, hoping to discover intelligent life beyond our planet. But it is increasingly likely that the first truly intelligent beings we encounter will be of our own creation—right here on Earth.

2 thoughts on “Hybrid and Augmented Intelligence: The Next Evolution of Consciousness”

  1. Concerning Neuralink: you might want to check out a company called Science Corp, founded by Max Hodak. Hodak co-founded Neuralink with Elon Musk in 2016 and served as its president before leaving in 2021 to establish Science Corp.

    Science Corp is pursuing the same broad goal of connecting devices directly to the brain but is exploring alternative technologies that aim to be less invasive than Neuralink’s surgical brain implants. One of its notable innovations is a bio-hybrid neural interface that uses living neurons, grown from stem cells, embedded in a silicon structure. This device is designed to be implanted onto the brain’s surface, where the neurons can integrate with the host brain tissue, potentially enabling high-bandwidth communication with less damage than traditional electrode-based implants.

