Greetings,
I need some help understanding how to identify relationships and build input pipelines for data with thousands of dimensions. There are many articles across the web, but I find them confusing.

Suppose we have data with, say, 50,000 dimensions containing signed floating-point values, and every variable is equally important because each one provides distinct information, so the model needs to learn from all of them. That makes most dimensionality reduction methods seem flawed to me.
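For concreteness, here is a minimal sketch of the kind of setup I mean. The random data, the binary target, and the layer sizes are just placeholders I made up for illustration; in my real case the data would come from disk:

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 1,000 samples of 50,000 signed floats
# (purely illustrative; the real data is much larger and read from files).
NUM_FEATURES = 50_000
X = np.random.randn(1_000, NUM_FEATURES).astype(np.float32)
y = np.random.randint(0, 2, size=(1_000,)).astype(np.float32)

# A basic tf.data input pipeline: shuffle, batch, prefetch.
dataset = (
    tf.data.Dataset.from_tensor_slices((X, y))
    .shuffle(buffer_size=1_000)
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)
)

# A simple dense model that consumes all 50,000 features directly,
# with no dimensionality reduction step in front of it.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(dataset, epochs=1)
```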
So in a case like this, what approach should be followed? I would be grateful if an example could be provided in TensorFlow.
Thanks