I am trying to scrape information about renewable energy manufacturers, suppliers and companies in Europe on the following website: https://www.energy-xprt.com/renewable-energy/companies/location-europe/.
The first step is to collect the URLs of each company in the list, but when I run a loop to scrape across pages I only obtain the links of the companies from the first page. My code looks like this:
```r
library(rvest)

# Build the URL for each of the 78 result pages
links <- paste0('https://www.energy-xprt.com/renewable-energy/companies/location-europe/page-', 1:78)

# For each page, parse the HTML and extract the href of every
# company link inside the "h2 mb-0" heading nodes
result <- lapply(links, function(x) {
  x %>%
    read_html() %>%
    html_nodes("[class='h2 mb-0']") %>%
    html_elements('a') %>%
    html_attr('href')
}) %>%
  unlist() %>%
  unique()
```
I expect to obtain a vector that contains the company URLs from all 78 pages, but the result only ever contains the links from page 1.
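One way to narrow this down is to check what the server actually returns for a later page: if the site redirects `page-N` URLs back to the first page (for example, when pagination is rendered by JavaScript), every request would yield the same HTML. A minimal diagnostic sketch, assuming the `httr` package is available (it is not used in the original code):

```r
library(httr)

# Request a later page directly and inspect the response.
# If resp$url differs from the requested URL, the server redirected us,
# which would explain why every "page" returns the same links.
resp <- GET("https://www.energy-xprt.com/renewable-energy/companies/location-europe/page-5")
status_code(resp)  # HTTP status of the final response
resp$url           # final URL after any redirects
```

Comparing the extracted hrefs from two different pages (e.g. `identical(result_page1, result_page5)`) would confirm whether the pages are genuinely duplicated.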