If you saved that value to a text file with a .csv extension, you’d have your original file back.
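For instance, a cell like this sketch would do it (`csv_text` is a hypothetical variable name for the recovered string):

```python
# Minimal sketch: `csv_text` is a hypothetical variable holding the file
# contents recovered from the JSON value; writing it out verbatim
# recreates the original CSV file.
with open('CROPS.csv', 'w', encoding='utf-8') as f:
    f.write(csv_text)
```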
Unfortunately, it looks like the dev tools viewer in Firefox truncates the values before parsing the JSON, so it breaks on files as large as your CROPS.csv.
It’s an empty sandbox for the moment, as I don’t have more than this quarter-hour to play right now, but thinking this may enable one of you dev dudes to create something wonderful, I just thought I should nail it up here straightaway.
Let a thousand flowers bloom, in the form of yourname.github.io/farmOS notebooks!
PS: “If it sounds too good to be true,” as the saying goes, “it probably is.” Certainly the case here.
After a whole lot of diddling around in GitHub, I must admit defeat (for today at least): I can’t find any way to upload files into the above-linked repo such that they will actually BE there for somebody coming in with another browser.
So all this affords me is another place to access a JupyterLite runtime. Not a bad thing, but it’s not what I was hoping it might provide. Is it just me, I wonder, or is the facility indeed this limited?
Although I found that JupyterLite does not automatically pick up changes to that directory; I needed to completely clear the browser cache for the JupyterLite site before the files would appear.
D’oh! And here I was, trying everything I could think of in the “gh-pages” branch, thinking that’s the one that is made for publishing to username.github.io!
Yes! After many cache flushes, an eventual switch to a different browser, plus significant time elapsed, I now see that the files I uploaded to the ‘main’ branch are indeed there (yay!).
NB: On returning to my default browser (Firefox), it took me some time to make yesterday’s version of the .ipynb go away; it’s not as simple as just flushing the browser cache. You have to go into the “Clear Recent History” dialog and tick ALL the boxes, including Data: Site Settings and Offline Website Data. Bit of a pain… Which is why I am glad to hear this:
Oh, good point @walt! You had the right idea, actually. I had to double-check, but I see how it works now: the repo has a GitHub Actions workflow that automatically publishes any changes from the main branch to the gh-pages branch for you!
That’s handy!
it took me some time to make yesterday’s version of the .ipynb go away; it’s not as simple as just flushing the browser cache. You have to go into the “Clear Recent History” dialog and tick ALL the boxes, including Data: Site Settings and Offline Website Data. Bit of a pain…
Yeah, agreed. It would be great if we could make this less painful by connecting directly to Drupal’s file storage mechanisms. It will take some thought, though, especially on the browser-storage-clearing question. It seems that JupyterLite has some hard assumptions there… although this upstream thread @Symbioquine found gives me some hope that they are thinking about it: Normalize and make Content frontends and backends extensible · Issue #315 · jupyterlite/jupyterlite · GitHub
I tested the “Animal CSV import” example, but I got an error in block [9] on this line:
resp = await pyfetch(location.origin + '/api/taxonomy_term/animal_type', method='POST',
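(For context, the full call in that block presumably looks something like the sketch below. This is a reconstruction based on farmOS’s JSON:API conventions, not the notebook’s exact code; the headers, payload shape, and term name are assumptions.)

```python
# Reconstruction (not the notebook's exact code): POST a new animal_type
# taxonomy term to farmOS's JSON:API from a JupyterLite/Pyodide cell.
import json
from js import location           # browser `location` object via Pyodide
from pyodide.http import pyfetch

# Drupal typically requires an X-CSRF-Token header for cookie-authenticated
# write operations; it can be fetched from the /session/token endpoint.
token = await (await pyfetch(location.origin + '/session/token')).string()

resp = await pyfetch(
    location.origin + '/api/taxonomy_term/animal_type',
    method='POST',
    headers={
        'Content-Type': 'application/vnd.api+json',
        'X-CSRF-Token': token,
    },
    body=json.dumps({
        'data': {
            'type': 'taxonomy_term--animal_type',
            'attributes': {'name': 'Sheep'},   # example term name
        }
    }),
)
print(resp.status)
```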
An HTTP 500 error is supposed to mean something went wrong on the server (farmOS in this case) side. Check the farmOS logs at /admin/reports/dblog on your farmOS instance. Hopefully that will give us a clue about what went wrong.
Symfony\Component\Routing\Exception\MethodNotAllowedException: in Drupal\Core\Routing\Router->matchRequest() (line 134 of /opt/drupal/web/core/lib/Drupal/Core/Routing/Router.php).
Yes, I should have permission to create animal_type according to the JSON:API setting: “Accept all JSON:API create, read, update, and delete operations.”
I added the Hostname, which I forgot to include, to the farmOS log above. Shouldn’t it be localhost?
If you’re comfortable doing so, can you open your browser dev tools’ Network tab, right-click on the request that is failing with the 500 error, and click “Copy as cURL”? The resulting clipboard contents can be pasted into a terminal (Mac or Linux), and “-i” can be added after the “curl” part of the command (e.g. “curl -i …”) to see the response headers along with the response body. That may give us a clue why it is failing…
I haven’t been able to reproduce this problem, but I’m happy to help troubleshoot further if you feel like opening another thread, @Farmy. I’m guessing this doesn’t really have anything to do with JupyterLite per se.
Yeah, it’s hard to tell how far afield to let the topic get. I think it can be useful to engage a bit on issues here, but probably not to go super deep on them…
Please help make examples for importing harvest logs from CSV files. I don’t understand what to do, but I successfully imported animals by following the examples. Thanks!
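Until someone writes up a proper example notebook, here is a minimal sketch of what such a cell might look like, following the same pyfetch pattern as the animal example. The CSV column names (“name”, “date”), the harvests.csv filename, and the payload shape are assumptions; check your farmOS instance’s JSON:API schema for the actual field names.

```python
# Minimal sketch (untested): create one harvest log per CSV row via
# farmOS's JSON:API. Column names, file name, and timestamp format are
# assumptions; adjust them to match your CSV and farmOS schema.
import csv
import json
from js import location
from pyodide.http import pyfetch

# Cookie-authenticated writes in Drupal need a CSRF token.
token = await (await pyfetch(location.origin + '/session/token')).string()

# Read the CSV uploaded into the JupyterLite file browser.
with open('harvests.csv', newline='') as f:
    rows = list(csv.DictReader(f))

for row in rows:
    payload = {
        'data': {
            'type': 'log--harvest',
            'attributes': {
                'name': row['name'],
                # Drupal's JSON:API generally accepts RFC 3339 datetimes,
                # e.g. '2023-06-01T12:00:00+00:00'.
                'timestamp': row['date'],
                'status': 'done',
            },
        }
    }
    resp = await pyfetch(
        location.origin + '/api/log/harvest',
        method='POST',
        headers={
            'Content-Type': 'application/vnd.api+json',
            'X-CSRF-Token': token,
        },
        body=json.dumps(payload),
    )
    print(resp.status, row['name'])
```

Note that farmOS also models quantities (weight, count, etc.) as separate quantity entities referenced from the log, so a real harvest import would likely need an extra request per row to create those; the sketch above only covers the bare log records.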