Right now honegumi.html is ~2-3 MB and roughly doubles in size with each new option. It could easily grow into the hundreds of MB, which would be bad from a page-loading perspective. Since honegumi effectively hardcodes a large lookup dictionary within the HTML file, there needs to be some kind of alternative.

Taking from a conversation with @danielcohenlive:
@danielcohenlive:
If you had a template that could render in both JavaScript and Python, you wouldn't have to store every permutation of the template on the frontend, and you wouldn't need server-side code to render and serve templates for the webpage.
It looks like Jinja does not have JavaScript support, but Handlebars and Mustache have libraries in Python. Another option would be to use a subprocess in Python to render the templates with Node.js. That is probably better from a consistency perspective; it'll just complicate your environment setup a little.
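To make the subprocess idea concrete, here is a minimal sketch (not honegumi code; the template text and context keys are made up) of rendering a shared Mustache template from Python by shelling out to Node.js. It assumes `node` is on the PATH and that `mustache` has been installed locally with `npm install mustache`:

```python
import json
import subprocess

# Inline Node script: read {"template": ..., "context": ...} from stdin,
# render with mustache.js, and write the result to stdout.
NODE_RENDERER = (
    "let buf = '';"
    "process.stdin.on('data', d => buf += d);"
    "process.stdin.on('end', () => {"
    "  const Mustache = require('mustache');"
    "  const { template, context } = JSON.parse(buf);"
    "  process.stdout.write(Mustache.render(template, context));"
    "});"
)

def render_with_node(template: str, context: dict) -> str:
    result = subprocess.run(
        ["node", "-e", NODE_RENDERER],
        input=json.dumps({"template": template, "context": context}),
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

# Hypothetical usage: the same template string could be rendered client-side
# with mustache.js, so only one copy of the template has to exist.
print(render_with_node("objective = '{{objective}}'", {"objective": "branin"}))
```

The appeal of this route is that the test suite exercises exactly the same renderer the browser would use, at the cost of requiring Node.js in the development environment.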
@sgbaird:
I considered various templating options, but I was initially focused on Python-based implementations (packages like Django and Mako often came up). In large part due to my lack of JS background, I tried to keep as much of it "Python-like" as possible, which I thought would also be important from a new-contributor standpoint. See, for example, a tutorial I made: A Gentle Introduction to Jinja2. One of the things that tipped me towards Jinja2 is that Sphinx uses it internally; I use Sphinx frequently for documentation, and others in the community would be familiar with it.
I wasn't aware of Handlebars. Thanks! Your comments are spurring me in a good direction in terms of considering alternatives. I think this ChatGPT transcript captured some good points and overall seemed reasonable. Pyodide might be a solution that doesn't change much about the architecture (it seems that Jinja2 is pure Python and therefore could be used with Pyodide).
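For reference, a rough sketch of how that could look, assuming Jinja2 can be installed into the Pyodide environment (the template text and variable names below are hypothetical): the page's JavaScript would load Pyodide and pass a snippet like this to `pyodide.runPython()`, getting the rendered string back as the value of the last expression.

```python
# Python executed inside the browser by Pyodide (hypothetical template/context).
from jinja2 import Template

template = Template(
    "objective_name = {{ objective }}\n"
    "{% if use_threshold %}threshold = {{ threshold }}\n{% endif %}"
)

rendered = template.render(objective='"branin"', use_threshold=True, threshold=0.5)
rendered  # runPython() hands the value of the last expression back to JavaScript
```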
What you described accurately reflects part of the scaling concern. I was curious and confirmed that honegumi.html (the interactive table embedded in the honegumi homepage) is currently ~3 MB and supports 11 binary options (one of which is hidden). Since each additional binary option roughly doubles the file size, a higher "ceiling" of 20 binary options would translate to about 3 MB × 2^9 ≈ 1.5 GB, and even a more modest 15 options comes out to roughly 3 MB × 2^4 ≈ 50 MB. This confirms that the current architecture is nearing its limits from the webpage-responsiveness perspective you brought up.
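The extrapolation above is just the doubling assumption applied to the measured baseline; as a quick sanity check:

```python
base_size_mb = 3    # measured size of honegumi.html today
base_options = 11   # current number of binary options
for n_options in (15, 20):
    size_mb = base_size_mb * 2 ** (n_options - base_options)
    print(f"{n_options} options -> ~{size_mb} MB")
# 15 options -> ~48 MB
# 20 options -> ~1536 MB (~1.5 GB)
```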
@danielcohenlive:
I wasn't aware of Pyodide, and it looks well maintained judging by the GitHub repo. (ChatGPT has suggested things to me as well supported that had no commits in 3+ years on GitHub.) My thought was the opposite: render via Node.js from Python subprocesses when testing. But Pyodide sounds great and should be able to substitute for a backend in the browser.
I see the options as:

1. Always generate code samples in JavaScript. When you test them in Python, render them with a Node.js subprocess (https://docs.python.org/3/library/subprocess.html#using-the-subprocess-module). (I guess you can't entirely switch the contents of honegumi's src and tests folders to Node.js, because you still have to verify the generated code runs in Ax; not that you'd want to rewrite them anyway.)
2. Do the opposite: use something like Pyodide to execute Python via JavaScript in the browser.
3. Always generate code samples in actual Python, which would require a backend that the readthedocs page makes requests to. It would be a little slower than rendering templates in JS, and there would be some complexity in maintaining that infrastructure.
4. Find a well-supported templating library that is expressive enough for your purposes and can run in both Python and JS (a sketch follows this list).
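For option 4, Mustache is one candidate: it has mature renderers on both sides (mustache.js in the browser, chevron or pystache in Python), though it is logic-less, so Jinja-style conditionals would have to become Mustache sections. A minimal sketch with made-up template fields:

```python
import chevron  # pip install chevron; a Python Mustache renderer

# Hypothetical template; {{#use_threshold}}...{{/use_threshold}} is a Mustache
# section that only renders when the flag is truthy.
TEMPLATE = """\
objective_name = "{{objective}}"
{{#use_threshold}}
threshold = {{threshold}}
{{/use_threshold}}
"""

print(chevron.render(TEMPLATE, {
    "objective": "branin",
    "use_threshold": True,
    "threshold": 0.5,
}))
# The identical template string renders client-side with Mustache.render(template, view).
```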
I don't understand exactly what is required for the Sphinx integration, though. Is Sphinx using the rendered templates in any way right now? It seems like Sphinx has some integration with readthedocs, but I'm not familiar with how that works. That might be more of a dealbreaker than I was aware of.