Hacker News: blueimp's comments

For local automated testing of mobile browsers on both iOS and Android you probably want to have a look at https://appium.io/, which uses the Webdriver protocol.

If you want to run the same tests against both mobile and desktop browsers, I recommend https://webdriver.io/, which uses Appium for mobile testing.

If you're interested in a Docker setup as a boilerplate to test against all major browsers (both mobile and desktop) with best practices for e.g. test video recordings, you might also want to take a look at this repo of mine: https://github.com/blueimp/wdio


This is fantastic advice. Thank you so much for sharing!

Multiple upvotes if I could.


You're welcome! :)


Hey aboutruby, the way to use this project is the following:

1. Check out the repo

2. Follow the README to set up the different browsers

3. Run the tests against the included sample app

4. Replace the sample app with your own app

That last part depends very much on your own app. It's easiest if your app can already be run via docker-compose; then you simply replace the example container with your own container set. Otherwise you can point the baseUrl in the wdio.conf file to your host machine (e.g. using `host.docker.internal`) and run your app on your host.
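A minimal sketch of that override, assuming a wdio.conf.js in WebdriverIO's standard config format (the port is a placeholder, not part of the project):

```javascript
// wdio.conf.js (fragment): point the tests at an app running on the
// Docker host instead of the bundled example container.
// `host.docker.internal` resolves to the host machine from inside
// Docker Desktop containers; the port here is an assumption.
exports.config = {
  // ...existing specs, capabilities, services, etc.
  baseUrl: 'http://host.docker.internal:8080'
}
```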


Any framework (including WebdriverIO) that uses the W3C Webdriver API or the older Selenium JSON Wire Protocol requires the appropriate driver for each browser.

In my opinion that's not a disadvantage, since the Webdriver API is a W3C standard and there are official drivers for each browser, implemented by the browser vendors themselves (with the exception of the IEDriver, which, as far as I know, is implemented by the Selenium project).

Unless you use a built-in browser automation API (like Webdriver / Puppeteer), the only alternative is to inject the test code via JavaScript, which might pose problems with the Content-Security-Policy directive and often requires the tested site to run in an iframe, which poses additional problems.


While Docker definitely supports tagging versions, I've decided to not tag the example images for now.

The main reason is that it would be very difficult to properly express what a version stands for. For example, there are multiple moving parts in the chromedriver image:

1. The Chrome version

2. The Chromedriver version (although this is tied to the Chrome version)

3. The Docker image configuration

I will try to keep changes to the chromedriver/geckodriver image configurations to a minimum, but I can't guarantee that.

Another reason not to tag those images is that Chrome/Firefox use a rolling release system, making version numbers less meaningful, since the latest version is usually the most important one to test.

My recommendation for anyone using those images for production CI infrastructure is to fork the repos and set up your own automated Docker builds.


I'd say it's definitely hard to write cross-browser automated tests that are not flaky.

Some of that is due to unreliable implementations of the Webdriver API (or the previous Selenium JSON Wire Protocol) in the different drivers.

Another part is that the API is asynchronous by nature, which might make it harder to reason about, although with WebdriverIO you can actually write your tests in a synchronous way, or use async/await with modern NodeJS versions.

Regarding the error you described (waiting for an element to appear when the element is already there): this might also be due to the element being outside of the viewport, e.g. the browser will not be able to click on it until you scroll it into view.
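The "wait for X" commands in Webdriver-based frameworks are essentially a poll-with-timeout loop. A generic sketch in plain Node (the `waitUntil` name and its defaults are mine, not a WebdriverIO API):

```javascript
// Poll an async predicate until it returns true or a timeout elapses —
// the same idea behind "wait for element" commands in Webdriver frameworks.
async function waitUntil(predicate, { timeout = 5000, interval = 100 } = {}) {
  const deadline = Date.now() + timeout
  for (;;) {
    if (await predicate()) return
    if (Date.now() >= deadline) throw new Error('waitUntil: timed out')
    await new Promise(resolve => setTimeout(resolve, interval))
  }
}
```

With such a helper you would wait for the element to be visible in the viewport (or scroll it into view) before clicking, instead of assuming it is immediately clickable.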


I think Puppeteer is an interesting project, but right now it's Chrome-only and therefore pretty much useless for cross-browser testing. Even with support for Firefox, it would still lack support for Safari Desktop, Safari Mobile, Internet Explorer and Microsoft Edge. It also won't allow you to run the same tests against real devices.

I think Puppeteer is likely the superior choice for any browser-automation task apart from cross-browser testing. But if you want to make sure that your website works for your users, a framework that uses the standardized W3C Webdriver API is the far better choice, unless you only want to support Chrome.


Hey Vinni,

you can definitely use this project and the containerized versions of Chrome/Firefox on CI - in fact that's its primary use case.

The way this project is set up is to use the chromedriver/geckodriver servers directly, without the Selenium Java server.

My recommendation for anyone using this in a production CI system is to fork the wdio, chromedriver, geckodriver and underlying basedriver repos and set up your own automated Docker builds for them.

For a given GitLab repository, you would add this project as a folder and modify the provided docker-compose.yml to replace the example app with the application files from your repository.

Please let me know if I can help you with additional instructions.


Ah, but for me the main appeal would be to point it to an image, and then not having to maintain it myself - which is what I'd be doing with forking, and which is basically what I'm already doing, but with more work.

So for example, in my GitLab CI config, I've simply added the following lines to my job configuration:

  services:
    - name: selenium/standalone-firefox:3.13
      alias: selenium
I can then simply tell my wdio config that Selenium is running at `selenium` (i.e. `wdio --host=selenium`), and it will work.

However, the setup is somewhat brittle, doesn't work with the latest versions of Selenium (and I can't be arsed to fix it), and I think it still starts an X server. If, instead, I could simply point it to an image that runs headless Firefox, is maintained, and intended for use with wdio, then that would be an excellent time saver.

When I have to fork, however, the hurdle to start using this is a lot higher, and the savings of not using the Selenium Java server are not really worth the additional effort.


Well, you can use the provided images without forking, and they both support running Chrome/Firefox headless without X.
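For comparison, requesting headless mode through plain Webdriver capabilities looks roughly like this (`goog:chromeOptions` and `moz:firefoxOptions` are the standard vendor capability keys; the exact wdio.conf layout may differ between WebdriverIO versions):

```javascript
// wdio.conf.js (fragment): run Chrome and Firefox headless, no X server needed.
exports.config = {
  capabilities: [
    { browserName: 'chrome', 'goog:chromeOptions': { args: ['--headless'] } },
    { browserName: 'firefox', 'moz:firefoxOptions': { args: ['-headless'] } }
  ]
}
```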

But since I'm building this in my personal time, there's no professional support nor a guarantee that it won't break, so I would still recommend forking it.


Please refer to the vulnerability documentation here to see if you are affected: https://github.com/blueimp/jQuery-File-Upload/blob/master/VU...


That was my thought as well.

I think one of the reasons nobody reported this earlier was that people simply assumed that .htaccess support was the default - Larry Cashdollar, the security researcher, also confirmed this: https://news.ycombinator.com/item?id=18271880


I do think that my project is responsible and not Apache, since I provided sample code that was not secure by default when used as-is with a default Apache configuration.

However, I wish Apache had changed their default config in a way that would signal an error if an .htaccess file is present but not applied.

Something that HN user fulafel also pointed out here: https://news.ycombinator.com/item?id=18272407

