
Testing questionary flows #35

Open
xmatthias opened this issue Jan 29, 2020 · 9 comments
Labels
Enhancement New feature or request

Comments

@xmatthias
Contributor

It would be great if questionary had an easy way to test input flows.

For example (reusing my example flow from #34):

from questionary import Separator, test_prompt

questions = [
    {
        "type": "confirm",
        "name": "conditional_step",
        "message": "Would you like the next question?",
        "default": True,
    },
    {
        "type": "text",
        "name": "next_question",
        "message": "Name this library?",
        "when": lambda x: x["conditional_step"],
        "validate": lambda val: val == "questionary",
    },
    {
        "type": "select",
        "name": "second_question",
        "message": "Select item",
        "choices": [
            "item1",
            "item2",
            Separator(),
            "other",
        ],
    },
    {
        "type": "text",
        "name": "second_question",
        "message": "Insert free text",
        "when": lambda x: x["second_question"] == "other",
    },
]

inputs = ["Yes", "questionary", "other", "free text something"]
vals = test_prompt(questions, inputs)

assert vals["second_question"] == "free text something"
# ...

By calling test_prompt() with an input string (or a list, which is probably easier to build), we could run through the whole workflow and verify that all keys are populated as expected.
This would allow proper CI for more complex flows, where one answer builds on another as in the question flow above.
I suspect this is already possible by mocking some internals of questionary, but I see that as a fragile approach: every minor change to those mocked functions would probably break my tests.

Most of the code / logic should already be available as part of questionary's own tests - however, that is not shipped when installing from PyPI...

@tmbo
Owner

tmbo commented Jan 30, 2020

That is a really good idea, and actually something we have been struggling with in https://github.com/rasahq/rasa as well.

I need to dig a little deeper, but ideally there should be a way to mock the actual IO that is provided by questionary itself (as you said, I don't think it is safe for users to mock these kinds of things, but it would be fine to provide & test it as part of the library).

Could be something like

from questionary.test import mocked_input
# ...

def test_input_questions():
  with mocked_input({"conditional_step": "Yes", "next_question": "questionary"}):
    vals = prompt(questions)
  # ...
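
A rough sketch of one way such a helper could be built on top of prompt_toolkit's pipe input. The name mocked_input, the monkeypatching approach, and the fact that it takes a raw keystroke string rather than per-question answers are all assumptions here, not a settled design:

import contextlib
from unittest import mock

from prompt_toolkit.input.defaults import create_pipe_input
from prompt_toolkit.output import DummyOutput

import questionary


@contextlib.contextmanager
def mocked_input(text):
    """Run the block with questionary reading `text` from a pipe instead of a TTY.

    `text` is the raw keystroke sequence; mapping answers by question name,
    as in the dict example above, would need additional logic.
    """
    inp = create_pipe_input()  # newer prompt_toolkit versions expose this as a context manager
    try:
        inp.send_text(text)
        original_prompt = questionary.prompt

        def piped_prompt(questions, **kwargs):
            # questionary.prompt forwards `input`/`output` to prompt_toolkit,
            # as questionary's own test helpers do.
            return original_prompt(questions, input=inp, output=DummyOutput(), **kwargs)

        with mock.patch.object(questionary, "prompt", piped_prompt):
            yield
    finally:
        inp.close()

A test could then read roughly `with mocked_input("y" + "questionary\r"): vals = questionary.prompt(questions)`, although the exact keystrokes each prompt type expects would still need to be documented.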

@tmbo tmbo added the Enhancement New feature or request label Jan 30, 2020
@tmbo tmbo changed the title [feature request] Testing questionary flows Testing questionary flows Jan 30, 2020
@yajo
Contributor

yajo commented Sep 25, 2020

I can see that the test suite has some interesting utils to test itself:

# Imports as in questionary's tests/utils.py (paths may differ slightly between versions):
import asyncio

from prompt_toolkit.input.defaults import create_pipe_input
from prompt_toolkit.output import DummyOutput

from questionary import prompt
from questionary.prompts import prompt_by_name
from questionary.utils import is_prompt_toolkit_3


class KeyInputs(object):
    DOWN = "\x1b[B"
    UP = "\x1b[A"
    LEFT = "\x1b[D"
    RIGHT = "\x1b[C"
    ENTER = "\x0a"
    ESCAPE = "\x1b"
    CONTROLC = "\x03"
    BACK = "\x7f"
    SPACE = " "
    TAB = "\x09"


def feed_cli_with_input(_type, message, texts, sleep_time=1, **kwargs):
    """
    Create a Prompt, feed it with the given user input and return the CLI
    object.

    You can provide multiple texts; the feeder will async sleep for
    `sleep_time` between them.

    This returns a (result, Application) tuple.
    """
    if not isinstance(texts, list):
        texts = [texts]

    inp = create_pipe_input()
    try:
        prompter = prompt_by_name(_type)
        application = prompter(message, input=inp, output=DummyOutput(), **kwargs)

        if is_prompt_toolkit_3():
            loop = asyncio.new_event_loop()
            future_result = loop.create_task(application.unsafe_ask_async())

            for i, text in enumerate(texts):
                # noinspection PyUnresolvedReferences
                inp.send_text(text)
                if i != len(texts) - 1:
                    loop.run_until_complete(asyncio.sleep(sleep_time))
            result = loop.run_until_complete(future_result)
        else:
            for text in texts:
                inp.send_text(text)
            result = application.unsafe_ask()

        return result, application
    finally:
        inp.close()


def patched_prompt(questions, text, **kwargs):
    """Create a prompt where the input and output are predefined."""
    inp = create_pipe_input()
    try:
        # noinspection PyUnresolvedReferences
        inp.send_text(text)
        result = prompt(questions, input=inp, output=DummyOutput(), **kwargs)
        return result
    finally:
        inp.close()

How about bundling all those utils in questionary and documenting them a little bit, so downstream projects can test too?

Note that a good answer to this might also land in prompt-toolkit/python-prompt-toolkit#1243.
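
To make that concrete, a downstream test could then look roughly like this, assuming the helpers keep their current names and become importable from a public module such as questionary.test (hypothetical):

from questionary.test import feed_cli_with_input  # hypothetical module path


def test_text_prompt():
    # Feed the raw keystrokes "questionary" followed by a carriage return to a text prompt.
    result, application = feed_cli_with_input("text", "Name this library?", "questionary\r")
    assert result == "questionary"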

@yajo
Contributor

yajo commented Oct 11, 2020

I've been using pexpect successfully in copier-org/copier#260 to test workflows. I recommend it, although it won't work on Windows, as explained in prompt-toolkit/python-prompt-toolkit#1243 (comment).
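
For reference, a minimal pexpect-based test might look like the sketch below; my_cli.py and the prompt texts are placeholders, and as noted this needs a real pseudo-terminal, so it won't run on Windows:

import pexpect


def test_flow_with_pexpect():
    # Spawn the CLI under test in a pseudo-terminal (hypothetical script name).
    child = pexpect.spawn("python my_cli.py", timeout=5, encoding="utf-8")

    # expect() takes regular expressions, so keep the patterns simple.
    child.expect("Would you like the next question")
    child.send("y")  # confirm prompts usually react to the bare key press

    child.expect("Name this library")
    child.sendline("questionary")

    child.expect(pexpect.EOF)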

@lyz-code

I've gathered a small example using @yajo's method here in case anyone is as lost as I was some hours ago.

@SteffenBrinckmann

The arrow-keys are such a great feature of questionary. Did anybody get the arrow-keys working for their testing environment? And if yes, could you post the code?

@kiancross
Collaborator

> The arrow-keys are such a great feature of questionary. Did anybody get the arrow-keys working for their testing environment? And if yes, could you post the code?

Some of our own tests use the arrow keys. You should be able to get this working in your own code by copying tests/utils.py. An example using arrow keys is here:

text = KeyInputs.DOWN + KeyInputs.DOWN + KeyInputs.ENTER + "\r"

As @yajo has suggested, we should expose these in the Questionary API so that people can more easily test their projects!
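
Put together with patched_prompt from the copied tests/utils.py, a select prompt can then be driven end to end. The example below is illustrative; how many DOWN presses you need depends on your choices and on how separators are skipped:

questions = [
    {
        "type": "select",
        "name": "item_choice",
        "message": "Select item",
        "choices": ["item1", "item2", "other"],
    }
]

# Two DOWN presses move the highlight from "item1" to "other", ENTER confirms.
text = KeyInputs.DOWN + KeyInputs.DOWN + KeyInputs.ENTER + "\r"

result = patched_prompt(questions, text)
assert result["item_choice"] == "other"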

@pmeier
Contributor

pmeier commented Oct 18, 2023

Any update on this? Currently we are vendoring tests/utils.py, but it would be much nicer to have these utilities in questionary.test or the like.

@kiancross
Collaborator

kiancross commented Dec 29, 2023

@pmeier No update at the moment, but happy to provide feedback on any PRs which attempt to implement this.

@pmeier
Contributor

pmeier commented Jan 2, 2024

I'll clean up our test code to see what exactly we need and send a PR afterwards.
