Amazon Price Tracker with python | With Source Code

Amazon Price Tracker with python | With Source Code

 Amazon Price Tracker with python | With Source Code



    Hi, Guys. Today we will see the Amazon price tracker project. If you are learning python then this Project is best for you because in this project you will learn more things in Python.

    In this project, you will also learn what web scraping is and how you can do web scraping because this project is on web scraping only.

    What is the amazon price tracker project?

    So guys, first we will understand the Amazon price tracker project then, We will understand how we can do this project.


    In the Amazon price tracker project, we take input from the user that the URL of the favorite product of the user and he plans to buy when its price has gone down.


    After taking that URL, We will Fetch the price, other details of that product by the URL.


    We will give an excel report of that price and other details of that product and we will also run our program on schedule every 24 hours.


    Then after running a program every 24 hours. we will compare after 24 hours price of the product and before 24 hours price of the product.


    If there is a difference it means the price of that product went down or up and we will see if its price went down then there is an offer and if its price went up then there is no right time to buy that product.


    Understand how we can implement code in python for the amazon price tracker project?

    So, I will divide this project into a few steps and we aim to complete all the steps one by one. Let's see all these steps.

    • The first step of the Amazon price tracker project is we have to take input from the user. We have to take the URL of the product that the user wants to buy when its price has gone down.
    • After taking input from the user, our second step is to open that URL on the browser then Fetch the HTML of that page that was open in the browser then We have to target the HTML ID of that product details we can see the HTML ID of that product details by inspect element. Then we have to take the data of that ID in python. 
    • You can do this task by two popular libraries for web scraping: Requests and BeautifulSoup. Requests help to open URLs in the browser and BeautifulSoup helps to fetch HTML and fetch data of products by targeting the HTML ID.
    • After getting data on that product we have to run that program every 24 hours and store the price of that product in one array.
    • Then we will compare the last two prices of that array and if there is a difference on the last two prices in the array that means the price of that product is gone up or down.
    • If the difference has come in positive numbers then it means its price has gone up and that is not the right time to buy that product and if the price has come in negative numbers it means there is an offer and it will right time to buy that product.
    • After seeing if there is an offer or not we have to visualize the details and offers in the excel report. We can easily do this task with one library of python that is Pandas.
    • This is important to give Excel reports because this is not the right way to show the details and offers on the python command prompt So, as a programmer you all have to maintain user experience while using that program.
    • The last but not the least step is to run our python program every 24 Hours Because this step helps to see if there is an offer or not. This step can easily be done by the Time module.

    So, guys after you understand this project you first try to do this project yourself then, If you did not can do this project then you can see the source code.

    Source Code:


    
    
    
    # importing libraries
    from bs4 import BeautifulSoup
    import requests
    import pandas as pd
    import os.path
    import time
    
    
    prices = []
    urls = []
    addurl = 'y'
    while 'y' == addurl:
        Url = input('Enter Your URL: ')
        addurl = input('If you want to add more URls Then type "y" else type any other key: ')
        urls.append(Url)
    
    
    
    
    f=0
    while True:
        data = []
        for url in urls:
    
    
            headers = {
                'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'}
            # Making the HTTP Request
            webpage = requests.get(url, headers=headers)
    
            print('Webpage founded')
    
            # Creating the Soup Object containing all data
            soup = BeautifulSoup(webpage.content, "lxml")
            
    
            print('Webpage content founded')
            title = soup.find(id="productTitle").get_text().strip()
            print(title)
    
            # categories = []
    
            # for li in soup.select("#wayfinding-breadcrumbs_container ul.a-unordered-list")[0].findAll("li"):
            #     categories.append(li.get_text().strip())
    
            # categories = [i for i in categories if i != "\u203a"]
    
    
            # features = []
    
            # for li in soup.select("#feature-bullets ul.a-unordered-list")[0].findAll('li'):
            #     features.append(li.get_text().strip())
    
            try:
                price = soup.select("#priceblock_ourprice")
                price = float(price.replace('/xa', ''))
    
            except Exception as e1:
                price = soup.select("#priceblock_saleprice")
                print(price)
    
                # price = float(price.replace('/xa', ''))
    
            review_count = soup.select("#acrCustomerReviewText")[0].get_text().split()[0]
    
            availability = soup.select("#availability")[0].get_text().split()
            space = ' '
            availability = space.join(availability)
    
            print('data founded')
    
    
            lst = [title, price, review_count, availability]
    
    
            df = pd.DataFrame(lst)
            df = df.T
            df.columns = ["title",  'price', "review_count", "availability"]
            # print(df)
    
    
    
    
            data.append(df)
            print('print done')
            
            prices.append(price)
            print(prices)
        
    
        n_prices = [prices[i:i+len(urls)] for i in range(0, len(prices), len(urls))]
    
    
        offer = []
        
    
        try:
            
            price1 = n_prices[-2]
            price2 = n_prices[-1]
            res_list = []
            for i in range(0, len(price1)):
                res_list.append(price1[i] - price2[i])
    
            for j in res_list:
                if j < 0:
                    print('price was up')
                    offer.append('Costly')
    
    
                elif j == 0:
                    offer.append('No')
                else:
                    print('There is a sale')
                    offer.append('Yes')
                    print(offer)
    
        except Exception as e3:
            pass
    
    
    
        data = pd.concat(data)
    
        try:
            data['offer'] = offer
    
        except Exception as e4:
            pass
    
    
        a=1
    
     
    
    
        try:
            if os.path.exists("data.csv"):
                raise FileExistsError
            else:
                data.to_csv('data.csv', index=False)
    
        except Exception as e2:
            while True:
                if os.path.exists(f"data{a}.csv"):
                    a = a+1
                
                else:
                    data.to_csv(f'data{a}.csv', index=False)
                    break
        f = f+1
        time.sleep(86400)
        
    
    


    So, guys, I hope you understand this amazing amazon price tracker project after doing this project you can attach this project with your resume and you and also show this project to your friends. If you face any problem related to this project then you can comment to me.


    Post a Comment

    0 Comments