
Form Filling Automation with Python

 

Overview:

This script demonstrates a basic automation flow for filling out web forms with data from an Excel file. It finds the next unprocessed row, maps Excel columns to form fields, submits the form, and updates the Excel sheet. A try-except block handles errors, and a finally block closes the WebDriver instance.

Steps:

Importing Libraries:

The code begins by importing the necessary libraries: webdriver from Selenium for browser automation, openpyxl for working with Excel files, and time for adding delays.



get_data_from_excel Function:

This function reads data from an Excel file. It starts by initializing an empty dictionary, data.


Loop to Find Next Available Row:

The function uses a while loop to find the next row whose status in column F is not "Done". As soon as such a row is found, the loop breaks and the function moves on to the next steps.


Reading Data from Excel:

Once an unprocessed row is found, the function extracts the column headers and the values in that row, builds a dictionary (data) with the headers as keys and the corresponding cell values, and returns both the dictionary and the updated row number.



fill_form_with_excel_data Function:

This function fills out a web form using data from the Excel file.
It opens a Chrome browser, navigates to the form page, and waits two seconds for the page to load.



Filling Form and Updating Excel:

The function calls get_data_from_excel to get employee data and the updated row number.
If employee data is available, it maps form field names to Excel column names and fills in the form fields using Selenium. After submitting the form, it sets the status of the processed row to "Done" in the Excel sheet.



Main Execution:

The script sets an initial row_number and calls the fill_form_with_excel_data function.
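The entry point can be sketched as below; the starting row of 2 (row 1 holding headers) and the file name "employees.xlsx" are placeholders of mine:

```python
def main():
    # Start at the first data row; adjust to match your sheet layout.
    row_number = 2
    fill_form_with_excel_data("employees.xlsx", row_number)
```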




Note: I created a simple form using the Flask framework; you can create your own too. You can find the complete Python file here: https://github.com/Aaminah27/Python-Scripts/blob/main/form-automation.py
