Hi everyone. In this video, I'm going to show you how you can use the Playwright
MCP to manually test your site and create a test report, or even a test
plan. No code. I'm literally going to show you how you can just use the
Playwright MCP to test your site without actually knowing how to code.
Yes. I don't even know if this is useful for you; this is just me playing
around with stuff. But I really want you to be curious: watch this video,
open your mind, and think, is this actually going to help me in my
workflow? Is it going to help me in my job? That's up to you. You're the expert
in your testing scenario, in your job. Just be mindful, take it away, and
think about whether this can work for you. I don't know. Do let me know in the
comments though. Right, let's dive in. So, I'm in VS Code, I have my
extensions here, and I've got the MCP servers installed. That's the
Playwright one that I'm using, and I'm going to go ahead and start the server.
If you haven't got Playwright installed, you can click on this little world icon
and it will open up a website where you can just click on the install button.
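For reference, the Playwright MCP server ends up configured in VS Code's `mcp.json`. A minimal sketch, assuming the `@playwright/mcp` package name (check the Playwright MCP docs for the current package and options):

```json
{
  "servers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

The install button I mentioned essentially writes an entry like this for you, so you never have to touch the file by hand.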
Okay, so that's cool. Now, I'm in VS Code, but I have absolutely no code.
Sometimes a manual tester can come into something like VS Code with no code
experience and use it, because it's actually really useful for writing
prompts, running them, and using the chat mode. So that's something that
could be really cool for you to explore. So here
I've got a prompt file that's a manual test prompt. It basically just lists
the tools that I want it to use, which is the Playwright MCP, and how it
should manually test a site. I'm going to go ahead and just run play. That
opens up a chat mode, and it's going to ask me for a scenario. So I'm going
to say: use filtering. Oh, it's in caps. Use filtering, filtering the
podcast. I want it to test that scenario on deb.codes. That's my website.
I'm going to press play and let it do its thing. It's opening up the
Playwright MCP server. It's opened my website, it's navigated to the URL,
and it's gone to the podcast section. I can hold my hands up so you can see
I'm doing nothing here. It can see the podcast, so it's going to start
exploring that page. Now, I didn't give it any other instructions; you saw I
literally just said the filtering. So it's clicking on that filtering and
testing to see what happens on that page: does it do what it's meant to do,
and so on. Now, this is really cool, because if I'm manually testing a site,
I can just go away and have a coffee and let it do its thing, or get other
things done while this runs. It's very useful, I think, and it's really cool
to watch. That's at least one thing. Anyway, it's going ahead and it's doing its thing there.
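While it works, here's roughly what a manual-test prompt file like that can look like. This is a sketch, not the actual file from the video; the front-matter fields follow VS Code's prompt-file format, and the tool name and wording are my assumptions:

```markdown
---
mode: agent
tools: ['playwright']
description: 'Manually test a site and write a test report'
---
Ask the user for a scenario to test.
Open the site with the Playwright MCP browser and explore the scenario
like a manual tester would: click through it, observe the results, and
note anything unexpected.
When finished, close the browser and write a markdown test report with
the test date, URL, scenario, steps performed, results, issues found,
and recommendations.
```

Running the file hands those instructions to the chat mode, which is why it prompts for a scenario before touching the browser.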
Now, you might have noticed that I have down here the Playwright tester, and
I'll just quickly show you that while it's creating the file I asked it for.
This is in the .github folder, under chat modes: I've got a Playwright tester
chat mode. All that is is me giving it instructions and responsibilities for
what it is. It's a manual tester, and I'm just giving it the tools that it
should use. That's all. It's really cool, because you can define its role and
its core responsibilities. But you can also just use agent mode if you don't
have this chat mode or don't want to create one. All right, let's close that
and have a look at the report that it has created. This is super exciting.
Let's pull this over a little bit. It's closed the browser, because I did ask
it to close the browser after it finished. And it's given me a beautiful
summary there in the chat, but it's also created this file, this test report.
Let me keep this open so I can see it a little bit better. I've got the test
date, the URL, the scenario, the overview, and the steps it performed: access
the podcast section, click the podcast link, verify navigation. The results:
successfully navigated to the podcast page. Observations: identified the
available filters. It tested the Playwright filter, the Next filter, and the
All filter, which resets. The test results: filtering functionality working
correctly, the URL changes, the page titles, the episode counting. Total
episodes 29, Playwright episodes 13, Next 11, testing episodes 13. All
filters functional: yes. Clear visual design, thank you.
Intuitive navigation, consistent behavior, responsive updates: the page
updates immediately when filters are applied. Informative feedback: episode
counts help users understand result quantities. Accessible labels: yes, I'm
good. Areas of excellence: performance, filtering appears to be instant, yay;
SEO friendly, each filter has its own URL, making content shareable;
breadcrumb context, page titles clearly indicate the current filter state;
and content quality, all episodes are properly tagged and relevant to their
categories. So this is really interesting. The issues found: none. All
tested functionality works as expected.
I could have thrown a bug in there just to see if it picked it up. And some
recommendations. I like this: visual enhancement, consider adding a visual
indicator, like highlighting or a different color, to show which filter is
currently active. I thought that was already there; I'll have to check on
that. Filter combinations: consider adding the ability to combine multiple
filters. Whoa, that's really clever. Maybe we could have, like, Playwright
and AI. Oh, wow, I never thought of that. Episode count preview: consider
showing episode counts on hover. That's interesting as well. So this is
really cool. I've just got some new ideas now that I can act on and improve.
And the overall assessment is a pass: all functionality tested successfully
with no issues found. Now, I generated that in just a couple of minutes. This
is really cool. I learned something, I can make some improvements, and I
didn't even write any code. Now, let's go ahead and create a test plan.
Someone asked me, "Have you created test plans?" I've never created a test
plan in my life. So this is me creating a test plan with no idea how to
create a test plan.
Some of you out there create test plans, and maybe this is going to be useful
to you. I don't know. Don't shoot the messenger. Let me see what I can do
with this. So, I've got another prompt that I created. Let's roll it on here:
the test plan prompt. This is so exciting; I think this is so cool. So, let
me see the test plan. What I'm going to do is create a new one here. And if I
run this, will it ask me what to do? I actually didn't ask it; I didn't tell
it. It says: create a comprehensive test plan, I'll help you create it. It's
going to start exploring. No, it doesn't need to do that. Okay, now it's read
the manual test. All right. Okay, it's gone ahead and found the report. So I
probably should have set up the prompt file to ask the user to give it
something, but it found the report anyway, because there's only one file in
there. I could improve that.
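As a sketch of that improvement, a test-plan prompt file that asks the user for the report up front might look something like this. The front matter follows VS Code's prompt-file format, and the wording is hypothetical, not the actual file:

```markdown
---
mode: agent
tools: ['playwright']
description: 'Create a test plan from a test report or manual findings'
---
Ask the user for a test report or manual testing findings.
Analyze the report: document the current functionality and the pages,
filters, and behaviors it covers.
Produce a test plan with scope (what will and will not be tested),
objectives, success criteria, a risk assessment, and a prioritized test
case inventory with step-by-step test cases, expected outcomes, and pass
criteria.
Close the browser tab after completing the test plan creation.
```

With the "ask the user" line in place, the prompt waits for input instead of guessing at whatever file happens to be in the folder.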
Nevertheless, I've asked it to create a test plan. It's gone ahead, it's
running through the report, and it's basically using the MCP server to do all
the actions based on the report. So imagine someone gave you a report and
said, create a test plan based on this report. That's what I'm doing. Again,
I'm not writing any code. I'm literally just writing things down in plain
English, or getting an AI to write things, and running things based on it. So
now it's conducted comprehensive manual testing of the podcast filtering
functionality, and it's going to create a detailed test plan based on its
findings from the existing test report. Okay, this is exciting. Now, again, I
didn't tell it to close the browser; I should probably do that. So that's the
kind of thing I can definitely improve on. While it's working there, let's
have a look at this test plan prompt, which basically just says: create a
comprehensive test plan based on existing test reports or manual testing
findings. So you could have pasted something in there; I had the report, so
that works. Then there's the analyze phase, document the current
functionality, and the test plan structure. Again, you could tell it which
structure you want, depending on the kind of structure you need, and the test
case format. See, this is the format that I've told it I think makes sense.
Then the risk assessment, some test plan best practices, and the
deliverables: the test plan document. And I can put in here: close the
browser tab after completing the test plan creation. Great. So again, this is
really easy to do with a
prompt. And again, you can say... oh, it's actually just gone and created it.
It jumped; I love when it jumps. Let me go back there quickly. And I can just
put in: ask the user for a test report or, what else did it say, manual
testing findings. So now it's going to ask and prompt for that, which is kind
of nice, because then I can keep track of things. So this is my test plan.
Let's go ahead and press keep and look at it. I've got my test date, the
application, the feature under test, the test environment, the browsers, the
platforms. Again, I could remove that if I wasn't interested in it. Let's see
what it did. What will be tested?
Filter tag functionality, URL navigation and routing, page titles: this is
the scope. What will not be tested? External podcast link functionality,
audio player functionality, social media integration, search functionality
(not present), backend API performance, or the content management system.
That's pretty cool, to know what's not going to be tested. I like these test
objectives: the primary goal is to verify all filter tags work correctly,
ensure the URL routing functions properly for each filter, validate episode
counts, and confirm the user interface. Success criteria: all nine filter
tags function correctly, URLs update, episode counts match, no broken links
or missing content, compliance for accessibility, and performance meets
baseline requirements. Then the primary URL, the filter URLs, total episodes,
available filters, and a test case inventory with priorities: high, and so
on. This is really cool. I could keep reading this out, but it's just been
created in a matter of, what, a minute and a half? And these are the test
steps here. This is really amazing: click on the All filter, verify the URL
changes, verify the page title changes to Podcast Interviews, verify the
episode count shows 29 episodes, verify all podcast episodes are displayed.
Expected outcome: all episodes are shown with the correct count and page
title. Pass if all 29 episodes are visible and correctly labeled. Love this.
So this is just testing different tags; nothing much interesting there. And
then we've got... so it's actually gone ahead and tested all the tags. This
is quite nice. And the mentoring, the filter reset functionality: it tested
that, pass if all 29 episodes are visible and page elements are correct.
Direct URL navigation: verify the page loads, verify the correct filter is
active, direct URL navigation works for all filter pages. Browser back and
forward navigation. So this has really gone and created a hell of a lot of
stuff here: page title updates, episode count display, content relevance
validation, visual filter indications, and it's given a priority as well.
Start on the main podcast page, click on any filter tag, current filter is
visible; pass if users can easily identify which filter is active. Filter
list accessibility: all filter elements are accessible. Interesting.
Keyboard navigation. Whoa.
I never test keyboard navigation. This is really cool. Category
accessibility: use keyboard only, no mouse. Oh my god. Mobile responsiveness:
test on a mobile device. Performance testing: page load. Cross-browser
compatibility. And edge cases and error handling: test with JavaScript
disabled. Okay, so there are a couple of things here that I don't want. I
don't want to test with JavaScript disabled, because that's just not what I
do, and I don't want to do cross-browser testing right now. Let's remove
that. So, as you can see, I can modify this test plan. Multi-tag episode
verification: we don't have that, so I could remove that test case, because
that functionality does not exist. Or I could leave it in there and it would
basically just not work. Actually, let's leave it in there and see what
happens. Feature: podcast section. There's a lot of stuff in here. Okay, this
is cool. And let's just remove Firefox, Safari, and Edge latest versions, and
leave Chrome for now. Mobile doesn't really... let's remove mobile browsers
for now and just do desktop. Okay. You could do tablet and mobile too; it's
fine.
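As an aside, the Playwright tester chat mode I showed earlier is also just a markdown file with front matter. Here's a minimal sketch, assuming VS Code's custom chat mode format; the wording is my guess rather than the actual file:

```markdown
---
description: 'A manual tester that explores sites with the Playwright MCP'
tools: ['playwright']
---
You are a manual tester. Your core responsibilities are to explore the
site the user gives you, test the scenario they describe, and report
what you observe. Use only the Playwright MCP tools to drive the browser.
```

Defining the role once in a chat mode means you don't have to restate it in every prompt, though plain agent mode works too.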
This is really cool. Secondary testing: let's remove that. And again, I could
have actually just asked the AI to remove these. Performance testing: there's
a lot of stuff in here. Now, this is the thing: you can decide, you can
refine this down. I want to go ahead and actually see if it can test this.
Now, I'm going to have to close this browser manually, because it's open; I
didn't actually ask it to close it earlier. So, always close the browser,
just in case. I'm going to open a new tab here. It's got this test plan in
its context, so it's here and you can see it can use it. And I'm just going
to say: manually test the... or, run the test plan. We'll say run the test
plan to check everything works as expected. This is really interesting.
Right.
Again, I have no idea if this is going to be useful to you. I'm just giving
you ideas that you can walk away with and be curious about. Is this going to
help you in your job? Is this going to make you more efficient? Is it going
to help you get things done faster? Is it going to give you more confidence
in your manual testing approach? If it does, then go ahead and play around
with this and give it a try. I think it's really cool, and I think it might
save you time. I could be completely wrong, but I'm definitely going to use
it as a way to check my website. No code involved: this is literally just
pure natural language testing my website, and I think that's really cool. I
wonder if it's going to pick up on anything. I should probably, again, break
something and see what report it comes back with. But I like the fact that I
can visually just watch what it's doing instead of just reading through a
whole report. I can actually say: yeah, it actually did do that, I did click
that, I did see the DevRel episodes there. When I clicked on DevRel, it goes
back and forward. There's a lag in the page title updates during navigation;
that's something I didn't know about. It's a minor issue, and the core
functionality works. The episode count shows one episode for mentoring; it's
actually clicked on two things there, DevRel and mentoring. That's
interesting. So it's now going to create the comprehensive test report based
on the testing it's completed. So again, I've gone from a test report to a
test plan, and then from the test plan to a test report. I don't know what
I'm doing here, but you can go ahead and take this away, play around with it,
and see: does this make sense to you? Is this useful? There are so many
things you can do with the Playwright MCP; it's actually just mind-blowing
what it can do in such a small amount of time. I'd love you to take this,
work on your website, work on something bigger than just my personal website,
and let me know what it does. Let me know if this is useful, if it's helpful.
Send me a comment in the chat; I'd love to hear from you. Thanks for
watching, everyone. It's just creating a report here. We could go ahead and
read it, but it's pretty much told me everything. I'm going to go ahead and
test more things on my site and see what it finds. Thanks for watching,
everyone. Have a great day. Bye.