Announcing the Keras Community Prize, first edition!

The Keras team has completed the prize judging process. We had a lot of fun going through the entries and discussing them, and we were really impressed with the creativity of the participants and the effort they put into their work. We extend our warmest thanks to all participants!

Here are the results:

Winner (5k): Prompt to Prompt Editing by @Miguel_Calado

The ability to do text-based image editing is definitely one of the most useful applications of generative image models. This project is an implementation of the paper “Prompt-to-Prompt Image Editing with Cross Attention Control”. The code quality is outstanding and the generation results are excellent. The project is also very well presented, with extensive explanations and clear code examples. We really enjoyed reviewing this one!

Winner (2k): Fine-tuning Stable Diffusion by @Sayak_Paul and @deep-diver

For those with enough GPU cycles (and memory!) to pull it off, fine-tuning Stable Diffusion on your own dataset is one of the most practically useful things you can do with the model. The code quality in this project is excellent and the generation quality is quite decent (and would presumably improve with further training). A lot of work clearly went into this project and we expect it to be broadly useful for the generative Keras ecosystem!

Winner (2k): Weather live cam by @avocardio

This project lets you generate a “live” view of the current weather in any German city. We could see it being practically useful as a way to generate an expressive background image for a weather app – “here’s what the current weather looks like in your city” – without requiring an actual livecam. It’s a clever and highly original idea; the project is very well presented, with clear explanations, helpful visuals, and excellent code quality. A great example of sophisticated integration of simple components to create something cool!

Honorable mention: Text to 3D Point Cloud by @Jobayer

This project enables you to go from a text prompt to a 3D point cloud of the corresponding object, by pipelining together Stable Diffusion and an image-to-pointcloud model. It’s a highly original idea that we see as being potentially very useful for 3D asset prototyping. The code is of very high quality and very readable. (We recommend adding more text explanations and comments, in particular an introduction presenting the use case and the method used, to improve discoverability and readability.)

Honorable mention: Morse diffusion by @Heman_B

This project enables you to dictate a generative prompt in Morse code via head movements, which could help people who are unable to use either a keyboard or their voice due to motor impairments. It uses the Mediapipe Facemesh model to go from a webcam feed to a Morse string, then feeds that string into Stable Diffusion. It’s a very original project that we found to be well executed and well presented; a lot of effort clearly went into this idea, and the code was pleasant to read.


Congrats to the winners!

We will be in touch with the three winners. Again, thanks a lot to everyone for participating; we really enjoyed reading through all of the entries!