Not sure if anyone else has done this, but when I was submitting my project, I put the app description and concept from my Google Form submission into several AI models and had them judge my concept against the Competition Rubric, to get an idea of how I fared in each category.
This would make a great app to build with RAG: everyone submits their app concept, and the app judges its uniqueness against the other entries and the rubric.
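The similarity half of that idea can be sketched in a few lines. A real RAG app would use an embedding model and a vector store; this toy version stands in with bag-of-words cosine similarity, and the function names (`cosine`, `uniqueness`) are made up for illustration:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def uniqueness(concept: str, entries: list[str]) -> float:
    """1.0 = nothing like it among existing entries, 0.0 = identical to one."""
    vec = Counter(concept.lower().split())
    if not entries:
        return 1.0
    # Uniqueness = 1 minus the closest match among existing entries.
    return 1.0 - max(cosine(vec, Counter(e.lower().split())) for e in entries)
```

Swapping the word counts for Gemini embeddings would be the obvious next step, but the scoring logic stays the same.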
That’s meta! It could make the judges’ work easier. They have the submissions in a sheet; it’d be possible to grade the videos with Gemini, write the results into the spreadsheet as well, and then grade all the fields.
Looking into some of the submissions that are part of the People’s Choice voting, I have a feeling some use OpenAI rather than Gemini. It’d be cool to write a bot that analyzes the source code to determine whether a project even uses Gemini. Even then, there’s still the question of whether the demo uses Gemini as well (e.g., if the project was also submitted to other hackathons with other LLMs, how do I know whether the demo runs on OpenAI or Gemini?).
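A first version of such a bot could just grep a repo for known SDK imports and API endpoints. The package and endpoint names below are the common real ones, but the whole heuristic is naive (it can be fooled by dead code, vendored dependencies, or raw REST calls), so treat this as a sketch, not a detector:

```python
import re
from pathlib import Path

# Signature patterns for each provider's SDK/endpoint. Intentionally crude.
SIGNATURES = {
    "gemini": [r"google\.generativeai", r"generativelanguage\.googleapis\.com"],
    "openai": [r"\bimport openai\b", r"from openai import", r"api\.openai\.com"],
    "meta":   [r"llama_cpp", r"meta-llama"],
}

def detect_llm_usage(repo_dir: str) -> dict:
    """Count signature hits per provider across source files in a repo."""
    counts = {name: 0 for name in SIGNATURES}
    for path in Path(repo_dir).rglob("*"):
        if path.suffix not in {".py", ".js", ".ts", ".java", ".ipynb"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, patterns in SIGNATURES.items():
            counts[name] += sum(len(re.findall(p, text)) for p in patterns)
    return counts
```

Zero Gemini hits alongside plenty of OpenAI hits would at least flag a submission for a human to look at; it can’t prove anything about what the demo video was actually running.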
I also had that feeling, and that’s why I think judging from the video alone is not enough. People also used Meta’s models. I hope that, if Google is taking its time, the judges have all of this in mind and can quickly analyze the code. I also had the feeling that many apps are incomplete, so how can they select the top 100 just from videos?
Just my opinion: it shouldn’t be a problem if someone uses other models or APIs (I had to use Google Chirp, Cloud Functions, etc., and I think even using OpenAI, Meta, etc. could be OK), if and only if the “main” LLM is Gemini and it’s at the center of the app.
Was there any top-100 announcement or shortlist?