Is a file too big for the AI to convert?

I am working with a database of around 80,000 rows × 60 columns, but it seems to take forever to convert the file. I left the tab open for an hour straight, and when I came back it was still converting. On top of that, when I retry, it throws the “Rd”, “Sd” error. I don’t know what to do at this point.


Welcome to the forum!

A back-of-the-envelope calculation gives 80,000 × 60 = 4,800,000 cells. Even if each cell in that table were just a single token, you would still be well over the regular input token limit.
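If you want to sanity-check that estimate yourself, here is a minimal sketch. The one-token-per-cell figure and the 1M-token limit are assumptions (and one token per cell is optimistic; real cells usually tokenize to several tokens):

```python
# Back-of-the-envelope token estimate.
# Assumptions (not from the thread): ~1 token per cell, ~1,000,000-token context window.
rows, cols = 80_000, 60
tokens_per_cell = 1          # optimistic lower bound
context_limit = 1_000_000    # approximate limit, for illustration only

estimated_tokens = rows * cols * tokens_per_cell
print(f"Estimated tokens: {estimated_tokens:,}")                    # 4,800,000
print(f"Over the limit by: {estimated_tokens - context_limit:,}")   # 3,800,000
```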

So yes, it seems this job is beyond what the AI is supposed to be able to handle. If you can, try to significantly reduce the number of columns.

Hope that helps!


Welcome to the forum!

When you say “convert the file” - what, exactly, are you trying to do?
Are you trying to load it all into a single prompt?
Are you trying to convert each row into an embedding?
Does each row already have an embedding and you’re trying to find a nearest neighbor match?

Giving us details and showing us the code you’re trying to use may help us understand what you’re trying to do. The more info you give, the better our chances are of being able to help.

Welcome to the dev forum @Andeee

The current context length of Gemini models is around 1 million tokens.

If you’re trying to convert structured data from one format to another, I’d recommend having the model write code that does that.
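For example, the generated script might look something like this minimal pandas sketch, assuming a CSV input and JSON output. The file names and formats here are placeholders, not anything from your actual setup:

```python
# A minimal sketch of the kind of conversion script the model could write for you.
# "data.csv" and "data.json" are hypothetical names; swap in your real source
# format, destination format, and paths.
import pandas as pd

df = pd.read_csv("data.csv")  # 80,000 rows x 60 columns loads fine locally
df.to_json("data.json", orient="records", lines=True)  # one JSON object per row
print(f"Converted {len(df):,} rows.")
```

Run locally, a conversion like this never touches the context window at all, because the model only has to produce the script rather than ingest the data.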

Continuing the discussion from Is a file too big for the AI to convert:



And I’ll repeat what I said back then.