Using AI is great. But is using two AIs twice as good? What about three? Many people mistakenly believe all AIs are the same, not understanding the difference between a large language model, an image-based diffusion model, a sound-analysis model, or a Pod Creation Module.
Each of these tools has its own value, and since this is all relatively new stuff, the APIs, UIs, and other three-letter spaghetti terms are all mixed up.
I’ve found that a model with an input window of around 4K tokens, an output window of about 4K tokens, and a total context of 32K tokens (Gemini 2.0 Thinking, as an example) is amazingly powerful, but it’s difficult to provide references and ask more than four hard questions with a model this size, and both human and AI get confused if you keep starting over from scratch.
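To see where that four-question ceiling comes from, here is a rough back-of-the-envelope sketch. The 4K/4K/32K figures are the ones mentioned above, and the four-characters-per-token rule is a common approximation, not an exact tokenizer:

```python
# Rough token-budget check for a small-context model.
# Assumes ~4 characters per token (a common rule of thumb);
# the 4K-in / 4K-out / 32K-total figures are from the text above.

CHARS_PER_TOKEN = 4
INPUT_LIMIT = 4_000     # tokens the model accepts per prompt
OUTPUT_LIMIT = 4_000    # tokens the model may produce per reply
TOTAL_CONTEXT = 32_000  # tokens of running conversation it retains

def estimate_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return len(text) // CHARS_PER_TOKEN + 1

def fits_input(prompt: str) -> bool:
    """True if the prompt should fit in the input window."""
    return estimate_tokens(prompt) <= INPUT_LIMIT

def turns_before_overflow() -> int:
    """Worst-case full question-and-answer turns before the
    total context fills up."""
    return TOTAL_CONTEXT // (INPUT_LIMIT + OUTPUT_LIMIT)
```

Note that 32K divided by (4K in + 4K out) is exactly 4, which is where the "more than four hard questions" wall comes from.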
My current workflow, which is quite amazing if I say so myself (and my Gemini AIs are apparently programmed to agree with me!), is to use a Gemini 1.5 with a 2M-token window, which seems slightly more powerful than 2.0 Flash, to create the workflow, guide the prompts, and review the progress, and then instantiate several 2.0 Thinking instances (32K token limit).
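The shape of that workflow is a planner/worker fan-out. A minimal sketch follows; `call_model` is a hypothetical stand-in for whatever model API you actually use (the model names and the one-liner "split" are placeholders, not a real SDK):

```python
# Sketch of a planner/worker fan-out: one large-context model plans
# and reviews, several small-context "thinking" models do the work.
# call_model() is a hypothetical stand-in for a real model API call.

def call_model(name: str, prompt: str) -> str:
    # Placeholder: in practice this would call your LLM provider.
    return f"[{name}] response to: {prompt[:40]}"

PLANNER = "large-context-model"  # e.g. a 2M-token planner/reviewer
WORKER = "small-thinking-model"  # e.g. a 32K-token worker

def run_workflow(task: str, n_workers: int = 3) -> str:
    # 1. The big-window model breaks the task into focused subtasks.
    plan = call_model(
        PLANNER, f"Split this into {n_workers} focused subtasks:\n{task}"
    )
    subtasks = [f"{plan} -- subtask {i}" for i in range(n_workers)]

    # 2. Each subtask goes to a fresh small-context worker instance,
    #    so no worker has to hold the whole project in 32K tokens.
    results = [call_model(WORKER, sub) for sub in subtasks]

    # 3. The planner, which has room for everything, reviews and merges.
    return call_model(
        PLANNER, "Review and merge these results:\n" + "\n".join(results)
    )
```

The point of the split is that each worker instance starts fresh with only its own subtask, while the planner is the only model that ever needs the full picture in context.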
The output is amazingly powerful, and I’ve been able to get work done that was frankly humanly impossible for me due to the advanced math and physics involved.
As a test I had it review hard-sphere gases with freezing, a fun topic that took me about a month to study before I felt I had a grasp on it. This supposes you have billiard-ball-like objects that collide in 3D space without gravity or any other attractive force, and you simulate the behavior. The Brownian motion in 3D creates behaviors that are quite complex and interesting to study. What took me a month a few years ago (I might have been smarter then, though) took less than an hour using this workflow, which was helpful both as a test of the workflow and as a way to refamiliarize myself with this arcane topic.
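For the curious, the heart of such a simulation is the elastic collision rule for equal-mass hard spheres: each collision exchanges the velocity components along the line between the two centers, leaving the tangential components untouched. A minimal 3D sketch of that one rule (pure Python, no event loop):

```python
import math

def resolve_collision(p1, v1, p2, v2):
    """Elastic collision of two equal-mass hard spheres in 3D.

    p1, p2: center positions at the moment of contact (3-tuples)
    v1, v2: incoming velocities (3-tuples)
    Returns the two outgoing velocities: the components along the
    line of centers are exchanged, tangential components are kept.
    """
    # Unit vector along the line of centers.
    n = [b - a for a, b in zip(p1, p2)]
    norm = math.sqrt(sum(c * c for c in n))
    n = [c / norm for c in n]

    # Relative velocity projected onto the line of centers.
    rel = sum((a - b) * c for a, b, c in zip(v1, v2, n))

    # Swap the normal components (equal masses).
    v1_out = [a - rel * c for a, c in zip(v1, n)]
    v2_out = [b + rel * c for b, c in zip(v2, n)]
    return v1_out, v2_out
```

In a head-on collision the two spheres simply exchange velocities; momentum and kinetic energy are conserved in every collision, which is what makes the long-run statistics (including freezing) meaningful.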
I’ve now used up my daily allotment of tokens having it do much more complex tasks for a 3D scleronomic formalism, in a phase-space paradigm, using geometric algebra to pose some reformulations of 150 years of high-level physics in a new paradigm.
Normally this is done with one crackpot professor and a dozen unsuspecting grad students. It takes five years on average before the grad students wise up and threaten to leave, and the work gets published to be reviewed by others. That is the ‘scientific’ method.
Thanks to the AIs and the ability to instantiate a few dozen without needing pizza or stipends, I’m able to get my crackpot theory done in record time.
Then I can have another AI from outside the Google universe provide critique, which is actually quite impressive, although painful when it points out flaws in my theory.
After a break I can then address these issues.
This iterative cycle takes days to do what historically took years, and frankly what, in the current scientific community, stopped happening a few dozen years ago, since people can publish the same unread paper their entire career before they retire and admit that nobody ever read their stuff.
I can create stuff that won’t be read in days, review it with unbiased, inhumanly fast reviewers, and iterate until even the outside AIs begrudgingly admit that I might be on to something.
Eventually I’ll have to find some humans, but until then I’ll wait for my timer to reset, or use Gemini Advanced, which is great but limited compared to AI Studio in ways that aren’t clear to me.
Let me know if you have questions about this workflow, or if you want to read a Theory of Everything that cheats by starting with what was learned by 1,000 people smarter than me and asking “What if?” with a head start over smart people who say, “everyone knows.”