How AI Will Revolutionize Hardware: A Vision Beyond the Current Industry

Colleagues, I’ve been delving into the capabilities of new AI models, particularly those like Gemini, and I believe their potential in the hardware domain is truly transformative, and perhaps even underestimated at present.

Currently, most AI models have limited understanding of circuit schematics. They tend to treat circuits as visual patterns rather than as abstract representations grounded in electrical principles. This directly limits their ability to debug circuits against real-world lab conditions (e.g., oscilloscope waveforms or images of actual circuits).

However, the new generation of models, exemplified by Gemini, exhibits capabilities approaching those of experienced hardware engineers: they can interpret schematics, component specifications, pin definitions, and other critical information, and then offer iterative optimization suggestions for both circuits and overall hardware designs based on experimental results.

I believe this capability is epoch-making; it will reshape the entire process from conceptual design to physical product realization.

Reconstructing the Hardware Industry Value Chain
As for Google’s investment arm, I think it is time to start taking action: acquire upstream and downstream manufacturers, reshape the entire industrial chain, and transform the structure and processes of the whole industry.

Let’s imagine the future of hardware development:

  1. From Idea to Prototype: Traditional product research, user analysis, cost and manufacturing cycle evaluations can all be efficiently handled by AI models.

  2. Virtual and Physical Convergence: AI can drive CAD software to generate virtual 3D samples and rapidly manufacture physical prototypes using 3D printing (or 6-axis CNC machining centers to directly machine metal enclosures).

  3. Automated Production: Combining automated PCB printing, robotic assembly, and other technologies to quickly produce engineering samples.

  4. Intelligent Testing and Iteration: AI-powered testing platforms, automatically built based on physical and industry knowledge bases, can comprehensively test samples and feed the results back to the design phase for multiple rounds of iterative optimization.
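The four steps above amount to a design–evaluate–feedback loop. As a rough illustration only, here is a toy sketch of such a loop in Python; the `evaluate` and `propose` functions are hypothetical stand-ins for the automated test bench and the AI design step, not any real pipeline:

```python
import random

def evaluate(design):
    """Stand-in for an automated test bench: score a candidate design.
    In the vision above this would be physical prototyping plus an
    AI-built test platform; here it is a toy cost function."""
    # Toy objective: hit a target parameter value of 5.0
    return -abs(design["param"] - 5.0)

def propose(best, step=1.0):
    """Stand-in for the AI design step: perturb the current best design."""
    return {"param": best["param"] + random.uniform(-step, step)}

def iterate(rounds=200):
    """Run the design -> prototype -> test -> feedback loop."""
    best = {"param": 0.0}
    best_score = evaluate(best)
    for _ in range(rounds):
        candidate = propose(best)
        score = evaluate(candidate)
        if score > best_score:  # keep improvements, feed back into design
            best, best_score = candidate, score
    return best

print(iterate())
```

The point is structural: once evaluation is automated, the loop can run unattended for as many rounds as the prototyping hardware allows.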

Advantages of AI-Driven Hardware Development

Compared to traditional hardware R&D models, this new AI-driven approach offers significant advantages:

  • Accelerated Iteration: No longer limited by human resources, multiple designs and tests can be conducted in parallel, significantly shortening development cycles.

  • Adversarial Optimization: By setting constraints and reward mechanisms, AI agents can monitor the results of prototyping and engage in “adversarial competition,” leading to superior designs.

  • Exhaustive Optimization: In specific areas, such as fluid dynamics design, AI can automatically exhaust all possibilities, quickly generating a large number of prototypes for testing (e.g., wind tunnel testing), without the need for designers to manually experiment.

  • Multi-Scenario Reuse: By designing reusable hardware platforms, coupled with different software and enclosures, the needs of various application scenarios (e.g., home, supermarket, restaurant, hotel) can be met.

  • Extreme Optimization: In large-scale production, AI can optimize key processes like mold manufacturing, improving production efficiency and product yield.

  • Complex System Design: Problems that previously required complex modeling can now be directly handed over to AI, allowing it to autonomously learn and optimize within constraints. Even if the process is a “black box,” the results are usable.
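As a toy illustration of the “exhaustive optimization” bullet above, the sketch below enumerates a small discretized design space and picks the lowest-drag candidate; `drag_model` is a hypothetical stand-in for a wind-tunnel or CFD measurement, and all numbers are made up:

```python
import itertools

def drag_model(chord, angle):
    """Hypothetical stand-in for a wind-tunnel (or CFD) measurement:
    returns a drag figure for one candidate geometry."""
    return (chord - 0.3) ** 2 + 0.5 * (angle - 4.0) ** 2

# Exhaustively enumerate the discretized design space
chords = [0.1 * i for i in range(1, 10)]   # 0.1 .. 0.9 (chord length)
angles = [0.5 * i for i in range(0, 21)]   # 0 .. 10 (angle of attack)
candidates = itertools.product(chords, angles)

# Test every combination and keep the best one
best = min(candidates, key=lambda c: drag_model(*c))
print(best)  # the lowest-drag (chord, angle) pair
```

With a real test rig the grid would be coarser and each evaluation far more expensive, but the exhaustive structure is the same.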

Breaking Through Existing Bottlenecks

Of course, to realize this vision, several challenges need to be addressed:

  • Supply Chain Integration: It’s necessary to connect upstream and downstream companies to achieve full-process automation and data sharing.

  • Standardization of Production Processes: Currently, there are differences in equipment and processes between small-batch trial production and large-scale mass production, requiring further integration.
    AI-assisted production lines would allow small-batch processes and tooling to be used directly for large-batch production, reducing costs and time.

Beyond “Industry 4.0”

It’s crucial to emphasize that AI here is not merely a supporting tool for “Industry 4.0,” but rather a core driving force, akin to a human expert. It should not be limited to 3D modeling and optimization, but should possess a deep understanding of the principles behind each step of the production process and of the relationship between sensor data and changes in the physical world.
This is like the future of image recognition: not just simple object detection (like YOLO), but achieving a true understanding of object properties (like distinguishing between strawberries and pears).

I believe that new-generation AI models like Gemini have the potential to completely reshape manufacturing, just as AI has already disrupted the field of drug discovery. This is not just an improvement in production efficiency, but a revolution in the entire industry model.


Here I do not mean to dictate what Google can or cannot make, but as an observer of software and IoT products, I think it would be revolutionary if, in the next few years, Google could sell integrated “IoT + Google AI + real-world service” offerings, such as:

  1. Modular ASSISTANT ROBOT:

This modular assistant robot could be shaped like a dog, a human, a drone, and so on. A robot with high intelligence should serve many functions. For example, add a hand or a basket to the module and it can carry routine shopping for parents, or serve as a baby carrier. Add cameras and radar and it can provide security assistance both at home and outside. If users are caught in a natural disaster or become victims of crime, the robot could broadcast warning signals or send incident data to the police, medical services, and others — and many more functions besides; fill in your own.

Like automotive companies around the world, Google would need to provide repair-shop and after-sales services for these robots. To build branches of shops and repair centers worldwide, Google would have to cooperate with various leading repair, robotics, and automotive companies.

  2. CLINIC:

It would be great if Google could produce an “AI patient medical record data manager” integrated with the major health-service applications worldwide (such as Zocdoc, Teladoc, Practo, HealthTap, Doctolib, Livi, etc.), so that every doctor around the world could provide faster, more accurate care based on the information it aggregates. The challenge, of course, is that each person’s medical record data must be entered and kept up to date manually, both by patients who have had a medical checkup and by health workers.

It would be better still if Google had its own health-service application integrated with its own clinic and pharmacy branches.
It would also be great if MRI (Magnetic Resonance Imaging) machines and Laboratory Information Systems (LIS) combined with AI were available not only in big hospitals but also in clinics around the world.
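As a rough sketch only, a core piece of such a record manager would be a merge rule for updates arriving from different apps. Everything here — the field names, the tuple layout, and the newest-entry-wins conflict rule — is hypothetical:

```python
from datetime import date

def merge_records(existing, updates):
    """Merge patient-record entries from multiple source apps.
    Each field maps to (value, date_updated); the most recently
    updated entry wins (a deliberately simple conflict rule)."""
    merged = dict(existing)
    for field, (value, updated_on) in updates.items():
        current = merged.get(field)
        if current is None or updated_on > current[1]:
            merged[field] = (value, updated_on)
    return merged

record = {"blood_type": ("O+", date(2020, 1, 5))}
incoming = {
    "blood_type": ("O+", date(2024, 6, 1)),        # newer entry wins
    "allergies": (["penicillin"], date(2023, 3, 9)),  # new field is added
}
print(merge_records(record, incoming))
```

A real system would of course need provenance, audit trails, and per-jurisdiction consent handling on top of any merge rule like this.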

  3. Modular AGRICULTURAL ROBOT:

It would be great if Google could produce “integrated agricultural power management robots,” starting with grass-clearing robots and moving on to soil cultivators, seed planters, fertilizer spreaders, pest controllers, and harvest collectors. The challenge, of course, is that these robots must come in options for a range of farm scales: small, medium, and large. Note that these agricultural robots need not look like something out of Transformers or Terminator; in practice, a modular agricultural robot can take the form of a weed remover, a drone, a tractor, a wheeled, legged, armed, or clawed machine, a harvesting vehicle, and so on.

As with the assistant robots, Google would need to provide repair-shop and after-sales services for these robots, building branches of shops and repair centers around the world in cooperation with various leading repair, robotics, and automotive companies.

  4. ROBOT ARM:

Manufacturers such as microchip and automotive companies already have robot arms. Is Google not interested in having one?


Thank you very much for your kind reply.
Because I have built some electronics and modeled some control systems over the years, I understand your point of view very well.

  1. Modular ASSISTANT ROBOT
    Maybe I am pessimistic about Google’s entry into this industry, because realizing these functions comes down to the underlying hardware control system. In college I worked on control systems such as Ball & Plate, including the actual implementation and parameter tuning of controllers such as PID and phase-lead compensators. These control systems are applied mostly in industrial and defense settings, and companies such as SpaceX or Boston Dynamics are better suited to this direction.
    As I mentioned in other posts, signal processing and recognition for cameras or millimeter-wave radar are not a good fit for large AI models such as Google’s Gemini: inference is too slow and token consumption is huge. A better approach is to use Google’s coding tools to assist in developing image-recognition systems similar to those of companies such as Halcomm.

2. CLINIC:

In one sentence, it is because of policy, not technology.
In fact, Google has genuinely tried to enter this industry, and it did not start working in the medical field recently. When I visited my friend at Google in San Jose at the end of 2018, he refused to answer whether Google had built AI models for medical imaging. In fact, refusal is also a kind of answer :)
Because I worked at a medical-industry investment company from 2016 to 2020, I interviewed and investigated many AI medical startups. Google may not even be the first here: IBM’s Watson project came earlier, and at the time Watson also had good accuracy on X-ray images.
In fact, the import, formatting, and content understanding of medical records, blood-test report databases, and image recognition are not difficult. The hottest era of medical image recognition was around 2018; back then I even did due diligence on startups doing tongue-coating image recognition.
The real problem is policy. Siemens equipment comes with very accurate AI tools, but they are restricted in many countries.
If you are interested in building such a medical system yourself, Google’s NotebookLM is actually enough; it is much better than the startup projects of previous years.

3. Modular AGRICULTURAL ROBOT:
Google actually acquired an autonomous driving company many years ago.
The “integrated agricultural power management robots” you describe are essentially autonomous driving vehicles plus automated agricultural equipment.
Autonomous driving on roads has been achieved, but autonomous driving in farmland, mountains, and similar terrain still needs time to mature. In particular, the system must judge whether the vehicle will get stuck in soft ground.
In terms of sensors, Google’s technology is much weaker than that of other Silicon Valley companies, especially for the hyperspectral imaging systems, torque sensors, and mechanical control systems that agriculture needs most.
In fact, many startups are developing automated agricultural or mining equipment, but actual performance is unsatisfactory: in harsh working environments and changeable climates, they fall far short of humans operating hydraulic machinery.

4. ROBOT ARM:

As I mentioned when discussing Google’s investment department in other posts, my opinion is that it is too conservative and not at all aggressive.
Since coding can be fully automated, hardware R&D and manufacturing can be automated as well. Current technology is already sufficient; we just need more people to try it and improve it.
For example, Google has a quantum computer but has not used this technology to make mass-produced quantum sensors, even though, from a technology-roadmap perspective, the two industries share essentially the same underlying technologies. In fact, friends of mine at Caltech who majored in quantum physics started a company after graduation that first made quantum accelerometers and then pivoted to quantum computers.
Google’s investment department behaves more like an extension of an Internet company’s other departments than like an investment firm that thinks independently.
Therefore, AI’s extensions and execution units in the physical world, such as robotic arms, may not be something Google has spent much effort on. For example, the most obvious application I have tried is using MCP to call MATLAB and then invoke a PID controller for parameter tuning. It has very clear industrial applications, yet Google does not seem to pay much attention to it, even though acquiring the relevant upstream and downstream players and redesigning and optimizing them, especially their interfaces to AI, is a very certain technical route.
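For readers unfamiliar with the PID tuning mentioned above, here is a minimal self-contained sketch: a discrete PID controller driving a toy first-order plant. The plant model and the gains are illustrative assumptions, not taken from any real MCP-to-MATLAB pipeline:

```python
def pid_step(state, error, kp, ki, kd, dt):
    """One update of a discrete PID controller.
    state = (integral, previous_error); returns (output, new_state)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

def simulate(kp, ki, kd, setpoint=1.0, steps=2000, dt=0.01):
    """Drive a toy first-order plant (dy/dt = -y + u) toward the setpoint
    using forward-Euler integration; returns the final plant output."""
    y, state = 0.0, (0.0, 0.0)
    for _ in range(steps):
        u, state = pid_step(state, setpoint - y, kp, ki, kd, dt)
        y += (-y + u) * dt
    return y

print(simulate(kp=2.0, ki=1.0, kd=0.1))  # settles near the setpoint 1.0
```

Automated tuning then just means wrapping `simulate` in a search over (kp, ki, kd) against a cost such as settling time or overshoot — exactly the kind of loop an AI agent could drive.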
😀