
Java Web Scraper project is returning null instead of normal links

I used Maven to pull in the HtmlUnit dependency for the web scraper. The main issue is that my scraper prints null instead of the links. I made an Item class with a setter and a getter.

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlAnchor;
import com.gargoylesoftware.htmlunit.html.HtmlElement;
import com.gargoylesoftware.htmlunit.html.HtmlPage;
import java.io.IOException;
import java.util.List;
public class Scraper {

private static final String searchUrl = "https://sfbay.craigslist.org/search/sss?query=iphone%208&sort=rel";

public static void main(String[] args) throws IOException {
        WebClient client = new WebClient();
        client.getOptions().setJavaScriptEnabled(false);
        client.getOptions().setCssEnabled(false);
        client.getOptions().setUseInsecureSSL(true);

        HtmlPage page = client.getPage(searchUrl);
        List<HtmlElement> items = page.getByXPath("//li[@class='result-row']");
        for(HtmlElement htmlItem : items){

             HtmlAnchor itemAnchor = ((HtmlAnchor)htmlItem.getFirstByXPath("//a[@class='result-image gallery']")); //itemAnchor gets the anchor specified by class result-image gallery//
             Item item = new Item();
             String link = itemAnchor.getHrefAttribute(); //link is extracted and initialized in string//
             item.setUrl(link); 
             System.out.println(item.getUrl()); //why don't you work//

        }
    }
}

Result: just a column of null printed line after line

*note: If I print System.out.println(link) instead, it does print a link, but it repeats that same link on every new line — in this case 'https://sfbay.craigslist.org/sby/mob/d/san-jose-iphone-plus-256-gb-black/7482411084.html' all the way down

I’m a complete beginner in this cruel world. Any help is useful. Edit: I’m including the dependency code here just in case. The Item class probably doesn’t need to be here, since it’s just a setter and a getter (setUrl and getUrl).

        <dependency>
            <groupId>net.sourceforge.htmlunit</groupId>
            <artifactId>htmlunit</artifactId>
            <version>2.60.0</version>
        </dependency>
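For reference, a minimal Item class matching that description — a single url field with setUrl and getUrl — would look something like this sketch (the original class isn’t shown):

```java
// Minimal Item class as described: one url field with a setter and a getter.
// (Sketch -- the actual class from the question is not shown.)
public class Item {

    private String url;

    public void setUrl(String url) {
        this.url = url;
    }

    public String getUrl() {
        return url;
    }
}
```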


Answer

This works here. The key change is the XPath inside the loop: your `//a[@class='result-image gallery']` starts with `//`, which searches the entire document from the root regardless of the context node, so every iteration finds the same first anchor on the page. The relative `a[@class='result-image gallery']` is evaluated against the current `<li>` instead, and the null check skips rows without a gallery anchor.

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlAnchor;
import com.gargoylesoftware.htmlunit.html.HtmlElement;
import com.gargoylesoftware.htmlunit.html.HtmlPage;
import java.io.IOException;
import java.util.List;

public static void main(String[] args) throws IOException {
    String url = "https://sfbay.craigslist.org/search/sss?query=iphone%208&sort=rel";

    try (final WebClient webClient = new WebClient()) {
        HtmlPage page = webClient.getPage(url);
        // webClient.waitForBackgroundJavaScript(10_000);

        List<HtmlElement> items = page.getByXPath("//li[@class='result-row']");
        for (HtmlElement htmlItem : items) {
            // relative XPath: search only inside the current <li>
            HtmlAnchor itemAnchor = htmlItem.getFirstByXPath("a[@class='result-image gallery']");
            if (itemAnchor != null) {
                String link = itemAnchor.getHrefAttribute();
                System.out.println("-> " + link);
            }
        }
    }
}

producing something like

-> https://sfbay.craigslist.org/eby/pho/d/walnut-creek-original-new-defender/7470991009.html
-> https://sfbay.craigslist.org/eby/pho/d/walnut-creek-original-new-defender/7471913572.html
-> https://sfbay.craigslist.org/eby/pho/d/walnut-creek-original-new-defender/7471010388.html
....
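The root cause isn’t specific to HtmlUnit: XPath defines `//a` as starting from the document root no matter which context node you evaluate it against. A small sketch using only the JDK’s built-in XPath engine (the HTML snippet and class name here are made up for illustration) shows the same behavior that produced the repeated link:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class XPathContextDemo {

    // Evaluates "//a" and "a" from the context of the SECOND <li> and returns
    // the two href values found: { absoluteResult, relativeResult }.
    public static String[] hrefs() throws Exception {
        String html = "<ul>"
                + "<li><a href='first.html'>first</a></li>"
                + "<li><a href='second.html'>second</a></li>"
                + "</ul>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(html.getBytes(StandardCharsets.UTF_8)));
        Element secondLi = (Element) doc.getElementsByTagName("li").item(1);
        XPathFactory xpf = XPathFactory.newInstance();

        // "//a" starts at the document root regardless of the context node,
        // so the first match is always the first anchor on the whole page.
        Element absolute = (Element) xpf.newXPath()
                .evaluate("//a", secondLi, XPathConstants.NODE);

        // "a" is resolved relative to the current <li>.
        Element relative = (Element) xpf.newXPath()
                .evaluate("a", secondLi, XPathConstants.NODE);

        return new String[] { absolute.getAttribute("href"), relative.getAttribute("href") };
    }

    public static void main(String[] args) throws Exception {
        String[] h = hrefs();
        System.out.println("//a -> " + h[0]); // first.html
        System.out.println("a   -> " + h[1]); // second.html
    }
}
```

That is why the original loop printed the same URL on every line: each iteration re-ran a document-wide search and took the first hit.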
User contributions licensed under: CC BY-SA