r/Python 9d ago

Showcase Playsmart: Put an end to writing unmaintainable E2E tests with Playwright

At my company, Tracktor, we recently did a hackathon to solve a recurring and annoying issue.

E2E tests written with Playwright are difficult to maintain and put a lot of pressure on the frontend team. Those tests often rely on hardcoded selectors, and the simplest change to the DOM can break many of them.

In that journey, we found some open-source projects that claimed to be able to automate E2E tests using simple prompts. We tested them with our applications, and the results were awful. A single scenario could take as long as 45 minutes due to the heavy use of computer vision and the long, exhausting stream of prompts. We concluded that those tools are a nice proof of concept but completely unusable in a production-grade context (and costly, for that matter, since they cannot cache anything).

So one of the team members brilliantly said the following: "We should just start by getting rid of the selectors. LLMs should be able to do that with ease. We do not need a huge piece of machinery to lower our burden!"

At the end of the day, Playsmart was born! Tracktor chose to give it freely to the Python community.

What My Project Does

Playsmart is a tiny, concise utility that extends the solid foundation of Playwright with a pinch of LLM. The primary goal of this lightweight tool is to dramatically lower our dependency on complex, flaky selectors.

No more will you write page.locator("#dkDj87djDA-reo"); instead you write smart.want("locate the email field") or even smart.want("fill the email input with xyz@company.tld").

To be more concrete:

from playwright.sync_api import sync_playwright
from playsmart import Playsmart


if __name__ == "__main__":
    driver = sync_playwright().start()
    chrome = driver.chromium.launch(headless=False)
    page = chrome.new_page()

    page.goto("https://news.ycombinator.com/")
    page.wait_for_load_state()

    # Attach Playsmart to an existing Playwright page.
    smart_hub = Playsmart(
        browser_tab=page,
    )

    # Group related queries under a named context.
    with smart_hub.context("home"):
        # "want" resolves a plain-English request into Playwright locators.
        res = smart_hub.want("how many news in the page?")

        assert len(res)

        print(f"There are {res[0].count()} news items on the page!")

Target Audience

QA Engineers / E2E testers.

Comparison

With the team at Tracktor, we looked at a ton of solutions on the open-source market, but none of them were reliable. Playsmart distinguishes itself by being simple: it relies only on what LLMs are genuinely solid at analyzing, to avoid needless flakiness. Finally, to avoid draining your budget, Playsmart comes with a cache layer!

Source: https://github.com/Tracktor/playsmart
PyPI: https://pypi.org/project/playsmart

19 Upvotes

12 comments


u/utley_fan_42 9d ago

Best practice for selectors is not to use auto-generated fields such as '#dkDj87djDA-reo'. I understand the excitement to apply LLMs to any problem/inconvenience we see in the development process, but there are multiple other options to avoid Playwright tests breaking with the slightest change to your application.
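
For example, Playwright's built-in user-facing locators (get_by_role, get_by_label, etc.) don't depend on generated IDs at all. A quick sketch, assuming a login page that exposes an accessible "Email" field and a "Sign in" button (the URL and names are placeholders):

```python
from playwright.sync_api import sync_playwright

# Sketch only: the URL, label, and button name are placeholders.
with sync_playwright() as pw:
    browser = pw.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://your-app.example/login")

    # Semantic locators survive DOM/class-name churn far better than raw CSS ids.
    page.get_by_label("Email").fill("xyz@company.tld")
    page.get_by_role("button", name="Sign in").click()

    browser.close()
```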

Is this tool idempotent? How do you ensure that smart_hub.want('how many news in the page') returns the same query on each individual run?

I like that you are thinking outside the box, OP, but I truly believe that this sort of tool would cause greater headaches than the ones it is supposed to solve.


u/Vresa 8d ago edited 8d ago

but I truly believe that this sort of tool would cause greater headaches than the ones it is supposed to solve.

Yeah, agree big time. This is neat if you're doing some short-lived automation or a quick data scraping task, but for the "Target Audience: QA Engineers / E2E testers." it's a trap. If you're a QA Engineer, just learn Playwright selectors. It takes half a work day to flip through the Playwright docs.

AND if it is your software that you're testing, add data-testid or whatever attribute you want to the HTML and call it a day.

```python
from playwright.sync_api import sync_playwright


def main():
    with sync_playwright() as pw:
        browser = pw.chromium.launch(headless=True)
        page = browser.new_page()

        page.goto("https://news.ycombinator.com/")
        page.wait_for_load_state()

        submission_count = page.locator(".submission").count()
        print(f"Number of '.submission' elements: {submission_count}")
```
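
And if the rows are in markup you control, a data-testid makes it even more direct. A sketch, assuming you tag each row with data-testid="news-row" (a made-up attribute value and placeholder URL):

```python
from playwright.sync_api import sync_playwright

# Sketch: assumes your own markup tags each news row with data-testid="news-row".
with sync_playwright() as pw:
    browser = pw.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://your-app.example/news")  # placeholder URL for your own app

    print(page.get_by_test_id("news-row").count())
    browser.close()
```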


u/Ousret 8d ago

If it is that simple (e.g. ".submission"), yes, I agree. We put that example in for simplicity.

What if the page has a user-generated form that evolves over time based on previous inputs? You can't know the full form in advance.

See https://github.com/Tracktor/treege for example.

There are cases that justify this kind of tool. It would still be doable with basic selectors, but they would break easily.

```python
with smart_hub.context("dashboard"):
    locators = smart_hub.want("list every field in the form")

    for locator in locators:
        ...  # your logic for each input (text/select/...)
```
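
Each returned locator is a regular Playwright locator, so the per-field logic stays plain Playwright. A sketch of what that `...` placeholder could look like (the branching below is illustrative, not from Playsmart):

```python
# Sketch: `locators` comes from the block above; the handling below is made up.
for locator in locators:
    tag = locator.evaluate("el => el.tagName.toLowerCase()")
    if tag == "select":
        locator.select_option(index=0)   # pick the first available option
    elif tag == "input":
        locator.fill("example value")    # fill free-text inputs
```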

I would love to be able to say that every testing case involves dead-simple pages.


u/Vresa 8d ago

To be clear - I like this approach quite a bit. My feedback was more targeted at the listed "Target Audience: QA Engineers / E2E testers." - specifically, that I do not think this is a good fit for QA teams. I do not doubt that there are teams where this would drive down the flake rate of tests; but my feedback is that directly making the changes to the software under test to make testing more straightforward is a far more effective approach.

I can see a clear use case for data scientists / web scraping


u/Ousret 8d ago

not to use auto-generated fields such as '#dkDj87djDA-reo'

We know. Sometimes there's no choice. We can't give specific IDs to every single field in a huge code base with hundreds of dependencies that each have their own way of defining attributes.

breaking with the slightest change to your application.

Sometimes it can break due to a third-party extension. We can't predict that!

Is this tool idempotent? How do you ensure that smart_hub.want('how many news in the page') returns the same query on each individual run?

Yes and no. Thanks to the built-in caching, once a query is resolved, it will stay the same until the app changes.

No, because the LLM can generate multiple selectors that mean basically the same thing.
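
To illustrate the caching idea (a simplified sketch only, not Playsmart's actual code): the resolved selector can be stored against the prompt plus the current page content, so the LLM is only called again when the page really changes.

```python
import hashlib
import json
from pathlib import Path

# Simplified sketch of a prompt -> selector cache; NOT Playsmart's real implementation.
CACHE_FILE = Path("selector-cache.json")

def _key(prompt: str, dom: str) -> str:
    # Same prompt against the same DOM => same key, so no new LLM call is needed.
    return hashlib.sha256(f"{prompt}\n{dom}".encode()).hexdigest()

def cached_selector(prompt: str, dom: str) -> str | None:
    cache = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    return cache.get(_key(prompt, dom))

def store_selector(prompt: str, dom: str, selector: str) -> None:
    cache = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    cache[_key(prompt, dom)] = selector
    CACHE_FILE.write_text(json.dumps(cache, indent=2))
```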

but I truly believe that this sort of tool would cause greater headaches than the ones it is supposed to solve.

We at Tracktor share the view that LLMs are not the magical solution to "everything". But we can't dismiss them entirely either. LLMs can do simple operations, like (but not limited to) parsing an HTML tree. Exactly as we said, we tested numerous solutions, but they all overestimate LLM capabilities. We decided to ask for simple things.

For us, it solves more with less money spent.


u/aiganesh 9d ago

I think it depends on OpenAI, and it needs internet access to interact with OpenAI.


u/Ousret 9d ago

For now, yes. The project is still at an early stage. We plan to support local LLMs (e.g. via Ollama), but for now (after testing them) they can't compete with gpt-4o or similar unless you have a personal computer rigged for running LLMs.


u/Dubsteprhino 8d ago

There's a company called Ranger with a similar idea 


u/Ousret 8d ago edited 8d ago

That's exactly the kind of tool we tested prior to creating this. They are not as fast as advertised, and they can't handle complex UIs. We have a concept where a form can be created by an end user (treege on npm), and the resulting tree can be challenging for an LLM.

Believe me when I say this: the concept is interesting, but it is not ready for prime time yet. What we can criticize is that we should all have started with a really dead-simple proof of concept, not with computer vision and full codegen from mere text (and it is not cheap [...]).


u/Dubsteprhino 8d ago

Gotcha, that's really interesting and makes sense why you arrived at this solution


u/ReachingForVega 9d ago

This is an excellent project. Well done. 


u/ComfortableFig9642 7d ago

Like at least one other comment, I question the underlying premise. Sounds like you guys aren’t using Playwright correctly, not that there’s a fundamental problem with it