We are trying to introduce devcontainer and GitHub Codespaces support in the repository.
Please leave feedback on the PR if you are a Codespaces beta tester or a VS Code user:
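For reference, a minimal devcontainer definition might look like the sketch below. This is an assumption for illustration, not the actual contents of the PR; the image and extension list are placeholders:

```json
// .devcontainer/devcontainer.json (hypothetical sketch)
{
  "name": "tensorflow-dev",
  // Reuse the official development image so Codespaces matches local Docker builds
  "image": "tensorflow/tensorflow:devel",
  // Preinstall the Python extension in the container's VS Code instance
  "extensions": ["ms-python.python"]
}
```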
For me, I do exploration and reporting in Jupyter, then all the project work inside PyCharm Pro. We run all our code inside Docker containers. For some client work we'll be on AWS-hosted notebooks to keep data inside their cloud.
Colab for experiments, sometimes a local Jupyter notebook, and PyCharm for final code. For training the final model for production I use GCP AI Platform.
I have an Anaconda environment that I keep up to date and the latest Community build of PyCharm, but I never use them because I end up doing all of my work in Colab.
Steps I follow to understand an issue:
(1) I create a gist (or use an existing one) in Colab to pinpoint where the exact issue is.
(2) I go back to Visual Studio Code and write a test that fails because of the issue.
(3) I run bazel test and reproduce the issue.
(4) I then add the fix in the code wherever required and run bazel test again.
(5) Repeat step 4 until bazel test passes.
(6) Push the code and raise a PR.
Open to suggestions if you have any ideas to improve this process.
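The steps above can be sketched as shell commands. The test target label and branch name below are placeholders, not real targets:

```shell
# (2)-(3) write a failing test, then reproduce the issue locally
bazel test //tensorflow/python:my_fix_test   # placeholder target label

# (4)-(5) edit the code, then re-run until the test goes green
bazel test //tensorflow/python:my_fix_test

# (6) push the branch and open the PR from it
git push origin my-fix-branch                # placeholder branch name
```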
How does the TF bazel build time impact your contribution routine? Do you iterate on tests locally, or do you wait for the TF team to manually kick off CI tests when you push commits?
It’s quite funny, but I analyse the issue I’m going to work on for the next few days before going to sleep. Just after waking up, I sync with upstream and kick off bazel build first thing, and by the time I actually start work (1-1.5 hours after waking), the build has usually completed. During the day, builds take very little time thanks to caching, and I usually don’t sync the branch during the day.
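Those fast incremental rebuilds can be made more robust with a persistent local disk cache, which survives `bazel clean` and output-base changes. A sketch of a user `.bazelrc` using the standard `--disk_cache` flag (the cache path is an example):

```
# ~/.bazelrc — persist action/test results across bazel invocations
build --disk_cache=~/.cache/bazel-disk
test  --disk_cache=~/.cache/bazel-disk
```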
@Bhack, any suggestions for making the bazel build faster?
I was trying to add a GitHub Action to continuously monitor and speed up the contributor build experience inside our official tensorflow/tensorflow:devel Docker image:
But we need to have a GCS cache to bootstrap the process. See
We also need to decide whether we want to wait for the WIP TF Dockerfiles refactoring in SIG Build.
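A workflow along those lines might look like the sketch below. The workflow name, schedule, bucket name, and build target are assumptions for illustration, not the actual PR content; the cache flags are standard bazel HTTP remote-cache options:

```yaml
# .github/workflows/bazel-cache.yml (hypothetical sketch)
name: bazel-build-cache
on:
  schedule:
    - cron: '0 3 * * *'   # nightly run to keep the remote cache warm
jobs:
  build:
    runs-on: ubuntu-latest
    # Same environment contributors use locally
    container: tensorflow/tensorflow:devel
    steps:
      - uses: actions/checkout@v2
      - name: Build with a GCS-backed remote cache
        run: >
          bazel build
          --remote_cache=https://storage.googleapis.com/EXAMPLE_BUCKET
          --google_default_credentials
          //tensorflow/tools/pip_package:build_pip_package
```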
@Bhack, is there a way I can configure a GitHub workflow on my fork that runs bazel test without those files getting pushed into the PR? My system gets too slow when I run bazel test while working on something else in parallel.
Yes, you can copy something like this PR into your repo:
But you need to bootstrap the remote cache (e.g. on GCS) with a first build on your machine using exactly the same environment (e.g. the tensorflow/tensorflow:devel Docker image).
Because, as you can see from that PR's GitHub Action execution log, if you start the build from scratch without a pre-populated remote cache, the GitHub Action will time out, as the compile takes too long.
You can find more info on how to add the remote cache param (e.g. on GCS) at
My goal is to have this available automatically for all contributors, so that we have a public read-only remote cache that is updated on every master commit/merge.
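With such a cache in place, contributors could consume it without any write credentials. A sketch using standard bazel remote-cache flags (the bucket name is a placeholder):

```shell
# Bootstrap (maintainer, write access): populate the GCS cache once,
# from the same environment as CI (tensorflow/tensorflow:devel).
bazel build \
  --remote_cache=https://storage.googleapis.com/EXAMPLE_TF_CACHE \
  --google_default_credentials \
  //tensorflow/...

# Contributors (read-only): hit the cache but never upload to it.
bazel build \
  --remote_cache=https://storage.googleapis.com/EXAMPLE_TF_CACHE \
  --remote_upload_local_results=false \
  //tensorflow/...
```

Keeping uploads disabled on the contributor side is what makes a public cache safe to share: cache hits are anonymous reads, while writes only ever come from the trusted CI job.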