How can I get the WebDriver version during a test run?
I am writing a test framework where the report should include the WebDriver version of the test run. When using Selenium RC there is the getEval("Selenium.version") method, but I find no way to read the version when using WebDriver. Does anyone know a solution?
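A minimal sketch for the Python bindings: read the version information from the driver's capabilities dictionary. The exact keys ("browserVersion", "chrome", "chromedriverVersion") are what current ChromeDriver sessions report; other browsers and older driver versions may use different keys, so treat them as assumptions:

```python
from selenium import webdriver

driver = webdriver.Chrome()

# Browser version as reported by the driver session.
browser_version = driver.capabilities.get("browserVersion")

# ChromeDriver-specific capability block; other drivers expose different keys.
chrome_info = driver.capabilities.get("chrome", {})
driver_version = chrome_info.get("chromedriverVersion", "unknown")

print("Browser:", browser_version)
print("ChromeDriver:", driver_version)

driver.quit()
```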
How can I launch a Chrome browser window inside my own Windows desktop application's UI?
I'm writing a Windows desktop application, and within my app I'm using Selenium to launch the Chrome browser. But it launches the browser outside my app.
Is there any way I can launch the browser inside my own application's UI, something similar to Custom Chrome Tabs for Android apps?
I'm using Python, Qt and PyQt5. I'm really new to this. Any help would be appreciated. Thanks.
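Selenium always drives a separate Chrome process, so its window cannot simply be re-parented into a Qt widget. A minimal sketch of the usual alternative, assuming PyQt5 with the QtWebEngine module installed, is to embed a Chromium-based view (QWebEngineView) directly in the application; note that this view is not controlled by Selenium:

```python
import sys
from PyQt5.QtWidgets import QApplication, QMainWindow
from PyQt5.QtWebEngineWidgets import QWebEngineView
from PyQt5.QtCore import QUrl

app = QApplication(sys.argv)

window = QMainWindow()
view = QWebEngineView()                      # embedded Chromium widget
view.load(QUrl("https://www.example.com"))   # page shown inside the app window
window.setCentralWidget(view)
window.resize(1024, 768)
window.show()

sys.exit(app.exec_())
```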
Website "gosugamers.net" is detecting Selenium
When you open gosugamers.net, it first shows an intermediate page (shown in a screenshot that is not included here).
After 5 seconds it automatically redirects to the main page. But when I try to open it using Selenium, it gets stuck on that intermediate page.
Chrome Driver Version: 86
Things I have tried so far:
Edited chromedriver.exe using Vim and changed the value of this variable:
var key = '$cdc_asdjflasutopfhvcZLmcfl_'; Original Value
var key = '$abc_asdjflasutopfhvcZLmcfl_'; Updated Value
    options = webdriver.ChromeOptions()
    options.add_experimental_option("excludeSwitches", ["enable-automation"])
    options.add_experimental_option('useAutomationExtension', False)
    driver = webdriver.Chrome(options=options)
    options = webdriver.ChromeOptions()
    options.add_argument("start-maximized")
    options.add_experimental_option("excludeSwitches", ["enable-automation"])
    options.add_experimental_option('useAutomationExtension', False)
    driver = webdriver.Chrome(options=options)
    driver.execute_script("Object.defineProperty(navigator, 'webdriver', {get: () => undefined})")
    driver.execute_cdp_cmd('Network.setUserAgentOverride', {"userAgent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.53 Safari/537.36'})
    print(driver.execute_script("return navigator.userAgent;"))
I get the same result with all of the above approaches. Can anyone help me with this? Thanks.
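One further option that is sometimes tried against this kind of detection is the third-party undetected-chromedriver package, which patches ChromeDriver at runtime. A minimal sketch, assuming the package is installed (pip install undetected-chromedriver) and that the site's check relies on the usual ChromeDriver fingerprints:

```python
import undetected_chromedriver as uc

# uc.Chrome() patches the driver binary and common automation fingerprints.
driver = uc.Chrome()
driver.get("https://www.gosugamers.net/")
print(driver.title)
```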
Selenium: finding the IP address of Chrome
I'm using a VPN that only changes the IP address of Chrome. Sometimes my VPN connection goes down, and I want to know when that happens by comparing my local IP address with Chrome's IP address: if they are equal, I know my VPN connection is down.
Is there any way to see the IP address used by the Chrome instance that Selenium starts (with a Selenium function or without, it doesn't matter)? I need that IP for more than just this case, which is why I don't want to rely on try/except.
Is there any way to find the IP address of the Chrome instance Selenium starts, in VB 6.0 and in Python?
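A hedged sketch in Python of one common approach: ask an external "what is my IP" service from both the Selenium-driven browser and the local machine, then compare the two results. The service URL (api.ipify.org) is just an example of such a service, not something Selenium provides:

```python
import requests
from selenium import webdriver

IP_SERVICE = "https://api.ipify.org"   # example echo-my-IP service

driver = webdriver.Chrome()

# IP as seen from the browser (goes through the VPN if only Chrome is tunnelled).
driver.get(IP_SERVICE)
browser_ip = driver.find_element_by_tag_name("body").text.strip()

# IP as seen from a plain local request (outside the browser-only VPN).
local_ip = requests.get(IP_SERVICE, timeout=10).text.strip()

if browser_ip == local_ip:
    print("VPN appears to be down:", browser_ip)
else:
    print("VPN active. Browser IP:", browser_ip, "- Local IP:", local_ip)
```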
Having trouble referencing a certain element on a page with Selenium
I am having a terribly hard time referencing the "next page" button on a website I am trying to scrape links from [https://www.sreality.cz/adresar?strana=2]. If you scroll down you can see a red right-arrow button that you click to go to the next page, after which the website loads new dynamic content. Every approach seems to report the same exact error, and I don't know how I am supposed to point to the element without running into it.
This is the code that I currently have:
    from selenium import webdriver

    chromedriver_path = "/home/user/Dokumenty/iCloud/RealityScraper/chromedriver"
    driver = webdriver.Chrome(chromedriver_path)
    print("WebDriver Successfully Initialized")
    driver.get("https://www.sreality.cz/adresar?strana=2")
    links = driver.find_elements_by_css_selector("h2.title a")
    nextPage = driver.find_element_by_css_selector("li.paging-item a.btn-paging-pn.icof.icon-arr-right.paging-next")
    for link in links:
        print(link.get_attribute("href"))
    nextPage.click()
The "nextPage" variable is holding a supposed value to be clicked on once the "links" variable search finishes scraping all the links from the company titles. However when I run this code I get an error :
selenium.common.exceptions.StaleElementReferenceException: Message:stale element reference: element is not attached to the page document
I have been searching for various fixes online but none of them seemed to resolve the issue. I think that the issue at this point is not caused by the element not loading quickly enough but rather Selenium having trouble finding the element because of wrong reference.
Because of this I have tried using XPath to accurately point to the actual element and so I changed the "nextPage" variable to :
nextPage = driver.find_element_by_xpath("""/html/body/div[2]/div[1]/div[2]/div[2]/div[4]/div/div/div/div[2]/div/div[2]/ul[1]/li[12]/a""")
This returns exactly the same error as above. I have been trying to find a solution for hours now and I can't understand where the issue lies. I would be grateful if anyone could explain what I am doing wrong. Thanks to anyone.
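A hedged sketch of the usual workaround: the page re-renders when new content loads, so any element found earlier can go stale; re-locating the button right before each click, and waiting for it to be clickable, avoids holding on to a detached reference. The selectors below are taken from the question and may need adjusting:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome("/home/user/Dokumenty/iCloud/RealityScraper/chromedriver")
driver.get("https://www.sreality.cz/adresar?strana=2")
wait = WebDriverWait(driver, 10)

for _ in range(3):  # scrape a few pages as an example
    # Wait until the links for the current page are present, then print them.
    wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "h2.title a")))
    for link in driver.find_elements_by_css_selector("h2.title a"):
        print(link.get_attribute("href"))

    # Re-locate the "next page" button on every iteration instead of reusing
    # a reference that was found before the page re-rendered.
    next_page = wait.until(EC.element_to_be_clickable(
        (By.CSS_SELECTOR, "a.btn-paging-pn.icof.icon-arr-right.paging-next")))
    next_page.click()
```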
Upload a file using Selenium
I have found the upload button and clicked on it, but now I can't find a way to send the path of the file that should be uploaded. Here is the HTML code:
<div style="overflow:hidden;"><input id="file" type="file" name="File" size="42" style="width:300px;font-family:Arial,sans-serif;font-size:8pt;"></div>
It would be appreciated if somebody could help me.
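A minimal sketch of the standard approach: instead of clicking the button and fighting the OS file dialog, send the file path directly to the <input type="file"> element with send_keys. The URL and file path below are placeholders:

```python
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com/upload-page")  # placeholder URL

# Type the absolute path into the file input instead of opening the dialog.
file_input = driver.find_element_by_id("file")
file_input.send_keys("/absolute/path/to/file.txt")  # placeholder path
```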
I am working with Selenium in Python and trying to automate Google sign-in in Chrome, but I am running into a problem.
Class has been compiled by a more recent version of the Java Environment
While running a Selenium script, I am getting the following error message in the Eclipse console:
Class has been compiled by a more recent version of the Java Environment (class file version 53.0), this version of the Java Runtime only recognizes class file versions up to 52.0.
- Java Version: 8
- IDE: Eclipse Oxygen
- Firefox Version: 46
Expanding and collapsing [+] and [-] elements with Selenium while crawling in Java
I am trying to expand and collapse [+] and [-] elements on a website using Selenium. My HTML code is:
<div onclick="abc_Click(this);" class="liCollapsed">
Here abc_Click(this) is the onclick handler of the element I am trying to click. The code I am using is:
driver.findElement(By.xpath("//*[@onclick='abc_Click(this)']")).click();
But the [+] is not clicked. Please help me with this.
Why does Selenium say "element not interactable" when scraping YouTube?
I tried to use Selenium to automate repeatedly searching for "Kerala Blasters" with this code:
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.common.keys import Keys
    from selenium.webdriver.common.action_chains import ActionChains
    import time

    PATH = '/home/hp/chromedriver'
    driver = webdriver.Chrome(PATH)
    driver.get("https://www.youtube.com/")
    search = driver.find_element_by_id("search")
    time.sleep(5)
    ActionChains(driver).move_to_element(search).click(search)
    search.send_keys("kerala blasters")
    search.send_keys(Keys.RETURN)
    driver.quit()
The error is this:
selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable
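A hedged sketch of a common fix: on youtube.com, id="search" matches more than one element, including a container that cannot receive keystrokes, so locating the actual text box by its name attribute ("search_query" in recent YouTube markup, an assumption that may change) and waiting for it to be clickable usually avoids the error:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome('/home/hp/chromedriver')
driver.get("https://www.youtube.com/")

# Wait for the real input box (name="search_query") rather than id="search",
# which also matches a non-interactable container element.
search_box = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.NAME, "search_query")))
search_box.send_keys("kerala blasters")
search_box.send_keys(Keys.RETURN)

driver.quit()
```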
Authorization in web.whatsapp using python requests
I wrote a Selenium script that loads the web.whatsapp.com page and finds the QR code for authorization in the source code. The script sends the QR code to my Telegram, and that way I can authorize an account on my home computer remotely.
    from time import sleep
    from qrcode import make
    from selenium import webdriver

    driver = webdriver.Chrome()
    driver.get("https://web.whatsapp.com/")
    sleep(3)  # wait for the QR code to load
    value = driver.find_element_by_css_selector("div[data-ref]").get_attribute("data-ref")  # value encoded in the QR
    print(value)
    make(value).save("qr.png")  # make my own QR image from that value
    send_to_tg("qr.png")  # send the QR to Telegram
    sleep(17)  # wait for authorization
    try:
        side = driver.find_element_by_id("side")  # if "side" exists, authorization was successful
        cookies = driver.get_cookies()
        print(cookies)
    except Exception:
        print("Noup !")
It works, but I can't keep the browser page open all the time, so I tried to save cookies. However, Selenium does not detect any cookies at all: if you call driver.get_cookies() after authorization, it returns an empty array [].
I checked the source of the web.whatsapp.com page after authorization and found cookies there (see the picture, not included here).
So I have one question that hides two ways of solving the problem. The first is: how can I save the web.whatsapp.com cookies and then continue to use the authorized account in other browsers?
And the second: can I start a session with requests, so that after authorization I can also save the cookies and use them in another browser?
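A hedged sketch of one way around this: rather than exporting cookies (web.whatsapp.com keeps most of its session state in browser storage rather than cookies, which would explain the empty list), point Chrome at a persistent user-data directory so the whole profile, including local storage, survives between runs. The directory path below is a placeholder:

```python
from selenium import webdriver

options = webdriver.ChromeOptions()
# Persist the whole Chrome profile (cookies, localStorage, IndexedDB) on disk.
options.add_argument("--user-data-dir=/home/user/whatsapp-profile")  # placeholder path

driver = webdriver.Chrome(options=options)
driver.get("https://web.whatsapp.com/")
# After scanning the QR once, later runs that reuse the same profile stay logged in.
```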
How can I get the value of the zoom setting with Selenium in Python?
I want to get the zoom level of Chrome with Selenium and Python.
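Selenium does not expose the browser's zoom setting directly, so a hedged sketch of a workaround is to read window.devicePixelRatio from the page; note that this value also reflects the operating system's display scaling, so treating it as the zoom level is an assumption that only holds at 100% system scaling:

```python
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.example.com")

# devicePixelRatio changes when the user zooms Chrome (Ctrl +/-),
# but it is also affected by OS display scaling.
ratio = driver.execute_script("return window.devicePixelRatio;")
print("Approximate zoom factor:", ratio)
```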
Using Scrapy to fill text area for logging in
I'm trying to scrape the player info from Transfermarkt (https://www.transfermarkt.com/spieler-statistik/wertvollstespieler/marktwertetop), and I successfully got the data I want.
But when I tried to scrape "My Player Watchlist" (https://www.transfermarkt.com/darrellcity/spielerWatchlist/meintm/1019535), which requires logging in, I had no idea how to fill in the text fields using Scrapy. I tried scrapy.FormRequest, but found that the website doesn't use a POST request to log in. I also tried to use Selenium to log in before scraping with Scrapy, but that did not seem to work.
I know how to do it in Selenium, but I want to use Scrapy instead to increase the speed of scraping and updating the data.
Below is my code for the player list, which doesn't require logging in:
    import time
    import scrapy
    from scrapy.http import FormRequest

    class TMSpider(scrapy.Spider):
        name = 'scrapyfirst'
        allowed_domain = ["transfermarkt.com"]

        def start_requests(self):
            urls = [f"https://www.transfermarkt.com/spieler-statistik/wertvollstespieler/marktwertetop?page={i}" for i in range(1, 11)]
            for url in urls:
                yield scrapy.Request(url, callback=self.parse)

        def parse(self, response):
            item = ProjItem()
            item['name'] = response.xpath('//td/a[not(contains(text(),"\r\n"))]/text()').getall()
            item['value'] = response.xpath('//td/a[not(contains(text(),"\r\n"))]/text()').getall()
            yield item
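A hedged sketch of one common pattern when a site's login flow is hard to reproduce with FormRequest: log in once with Selenium, export the session cookies from the driver, and hand them to Scrapy's requests. The login steps and the WatchlistSpider/get_login_cookies names are illustrative, not verified against Transfermarkt:

```python
import scrapy
from selenium import webdriver

def get_login_cookies():
    """Log in with Selenium and return the cookies as a {name: value} dict."""
    driver = webdriver.Chrome()
    driver.get("https://www.transfermarkt.com/")
    # ... perform the login clicks/typing here (site-specific, omitted) ...
    cookies = {c["name"]: c["value"] for c in driver.get_cookies()}
    driver.quit()
    return cookies

class WatchlistSpider(scrapy.Spider):
    name = "watchlist"

    def start_requests(self):
        cookies = get_login_cookies()
        yield scrapy.Request(
            "https://www.transfermarkt.com/darrellcity/spielerWatchlist/meintm/1019535",
            cookies=cookies,          # reuse the authenticated session inside Scrapy
            callback=self.parse,
        )

    def parse(self, response):
        # Parse the watchlist page here.
        self.logger.info("Fetched %s", response.url)
```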
Get all elements with absolute position selenium
Is there a way in Selenium (Java) to get all the elements on a page with a
position: absolute;
in the CSS of the page?
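Computed styles cannot be queried with a plain CSS selector, so a hedged sketch of a workaround is to run a small piece of JavaScript through executeScript and filter elements by their computed position. The example is in Python for brevity; the Java bindings expose the same executeScript call and likewise return the matched nodes as WebElement objects:

```python
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.example.com")

# DOM nodes returned from the script are converted to WebElement objects.
absolute_elements = driver.execute_script("""
    return Array.from(document.querySelectorAll('*'))
        .filter(el => window.getComputedStyle(el).position === 'absolute');
""")

print(len(absolute_elements), "absolutely positioned elements found")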
How to read all the error messages on a login page (Selenium)
I want to validate all the error messages on the login page. I provide data to a method through a @DataProvider (which collects it from Excel), and I would like to read all the error messages that appear when passing an incorrect username and password.
Error messages such as:
1. Attempt 1: Incorrect Username and password
2. Attempt 2: Incorrect Username and password
3. Attempt 3: Incorrect Username and password
4. Try after 30 mins
5. Enter Username (blank)
6. etc.
Do I run a loop after counting the rows provided by the @DataProvider? If so, how do I count the data that is provided? Please suggest how to handle this.
Python Selenium TimeoutException
Is it possible to extend the default timeout before Selenium throws a TimeoutException?
My script crashes on page loads that take over 300 seconds. The script triggers a PHP script on my backend; if the PHP script runs for less than 300 seconds everything is fine, but when it runs longer, Selenium throws a TimeoutException:
TimeoutException: Message: timeout: Timed out receiving message from renderer: 300.000
Is there a way to tell Selenium to just wait until the script is done running?
I have tried expected_conditions and it does not help.
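A hedged sketch of the usual remedy in the Python bindings: raise the page-load timeout on the driver itself with set_page_load_timeout before navigating, choosing a limit comfortably above the backend script's worst case. The URL below is a placeholder:

```python
from selenium import webdriver

driver = webdriver.Chrome()

# Allow page loads (including the long-running PHP response) up to 10 minutes.
driver.set_page_load_timeout(600)

driver.get("https://example.com/trigger-long-php-script")  # placeholder URL
```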
BeautifulSoup: how can I get text without a class identifier?
While crawling the website, some of the text I want to pull has no class name or id that would let me isolate the part containing it. The selector path I used with soup.select doesn't work reliably when run repeatedly. As an example, I want to extract the data below, but I don't know how to do it.
Python Selenium Wait for User Interaction to Continue
I have a Python script that basically goes to a page and automatically submits fields for me.
But there are some cases where I might have to intervene and make manual changes, and I don't want to make the program sleep() for, say, 5 minutes just in case the user has to intervene, because when no intervention is needed the script would simply stand there and wait for 5 minutes.
So basically what I want is for Selenium to wait for the user to press a button with a given XPath (say XPATH1) before it proceeds with the rest of the code. I could also do the same with a key combination: when the user has checked that everything is OK, he could press ENTER and that would trigger Selenium to continue.
    # Pseudo code
    waitForUsr = WebDriverWait.until(User_click_Button_with_XPATH("XPATH"))
    waitForUsr = WebDriverWait.until(Keys.ENTER_is_pressed)
Thank you for your time! I hope you can help me out.
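A hedged sketch of two ways this is often handled. The simplest is to block the script on a console input() call; a browser-side variant injects a "continue" button and a flag into the page and polls the flag with WebDriverWait until the user clicks. The injected button and the window.__resume flag are illustrative names, not anything Selenium provides:

```python
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://example.com/form")  # placeholder URL

# Option 1: pause until the operator presses Enter in the terminal.
input("Make any manual changes in the browser, then press Enter here...")

# Option 2: add a button to the page and wait until the user clicks it.
driver.execute_script("""
    window.__resume = false;
    const btn = document.createElement('button');
    btn.textContent = 'Continue script';
    btn.style.cssText = 'position:fixed;top:10px;right:10px;z-index:99999;';
    btn.onclick = () => { window.__resume = true; };
    document.body.appendChild(btn);
""")
WebDriverWait(driver, 600).until(
    lambda d: d.execute_script("return window.__resume === true;"))

# ...continue with the rest of the automation here...
```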
How to pass arguments to Selenium test functions in Pytest?
I want to make my tests more flexible. For example, I have a test_login that could be reused with multiple different login credentials. How do I pass the credentials as arguments instead of hard-coding them?
What I have right now:
    from selenium import webdriver
    import pytest

    def test_login():
        driver = webdriver.Chrome()
        driver.get("https://semantic-ui.com/examples/login.html")
        emailBox = driver.find_element_by_name("email")
        pwBox = driver.find_element_by_name("password")
        emailBox.send_keys("someLogin")
        pwBox.send_keys("somePW")
How can I replace the string literals in the last two lines with something more flexible?
I want to have something like this:
    from selenium import webdriver
    import pytest

    def test_login(specifiedEmail, specifiedPW):
        driver = webdriver.Chrome()
        driver.get("https://semantic-ui.com/examples/login.html")
        emailBox = driver.find_element_by_name("email")
        pwBox = driver.find_element_by_name("password")
        emailBox.send_keys(specifiedEmail)
        pwBox.send_keys(specifiedPW)
Could you explain how to do this by calling the script as:
pytest main.py *specifiedEmail* *specifiedPW*
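A hedged sketch of the standard pytest mechanism: register command-line options in a conftest.py with pytest_addoption, expose them through fixtures, and let the test request those fixtures. Note that pytest takes named options rather than bare positional arguments, and --email/--password below are made-up option names:

```python
# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--email", action="store", default="someLogin")
    parser.addoption("--password", action="store", default="somePW")

@pytest.fixture
def specifiedEmail(request):
    return request.config.getoption("--email")

@pytest.fixture
def specifiedPW(request):
    return request.config.getoption("--password")
```

The test then simply declares the fixtures as parameters:

```python
# main.py
from selenium import webdriver

def test_login(specifiedEmail, specifiedPW):
    driver = webdriver.Chrome()
    driver.get("https://semantic-ui.com/examples/login.html")
    driver.find_element_by_name("email").send_keys(specifiedEmail)
    driver.find_element_by_name("password").send_keys(specifiedPW)
```

and would be invoked as, for example, pytest main.py --email=user@example.com --password=secret.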
Did anyone use the Multilogin app's API with Python? [closed]
Can you share your experience, or maybe Python code samples, if you have any implementation of the Multilogin local API and the version 2 API for creating, deleting and updating Multilogin browser profiles dynamically from a Python script?
I have no clue about their Swagger v2 API...
Thanks in advance for your help.