
pYr0rAGE


Forum Posts: 5 | Wiki Points: 0 | Following: 0 | Followers: 0

pYr0rAGE's forum posts


These API calls all return the same thing:

https://www.giantbomb.com/api/reviews/?api_key=<KEY>&sort=reviewer:asc&limit=2&offset=0&format=json
https://www.giantbomb.com/api/reviews/?api_key=<KEY>&sort=reviewer:desc&limit=2&offset=0&format=json
https://www.giantbomb.com/api/reviews/?api_key=<KEY>&sort=reviewer:desc&limit=2&offset=1&format=json

I'd expect each of these to return different results. Using the same set of parameters with /user_reviews/ seems to work, so it appears specific to /reviews/. Is this a known issue?
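
For reference, a minimal Python sketch that reproduces the comparison. It assumes the usual JSON envelope with a top-level "results" list whose entries carry an "id" field, and it uses <KEY> as a placeholder for a real API key:

    # Query /reviews/ with different sort/offset parameters and compare the IDs
    # that come back. Assumes the `requests` library is installed.
    import requests

    API_KEY = "<KEY>"  # placeholder, substitute a real key
    BASE = "https://www.giantbomb.com/api/reviews/"
    HEADERS = {"User-Agent": "gb-api-sort-test"}  # a descriptive User-Agent, per API guidelines

    def fetch_ids(sort, offset):
        params = {
            "api_key": API_KEY,
            "sort": sort,
            "limit": 2,
            "offset": offset,
            "format": "json",
        }
        resp = requests.get(BASE, params=params, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        return [r["id"] for r in resp.json()["results"]]

    a = fetch_ids("reviewer:asc", 0)
    b = fetch_ids("reviewer:desc", 0)
    c = fetch_ids("reviewer:desc", 1)
    # Per the behaviour described above, all three lists come back identical,
    # even though the sort direction and offset differ.
    print(a, b, c)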


Hello,

I am a machine learning enthusiast, and I am interested in using the GB dataset to experiment with building recommendation systems via deep learning. I could write a scraper to pull the pieces of information I need from the web API (while being respectful of the rate limits :) ), but I was wondering whether a static dump of the dataset is available for download. If so, it could save some load on the server, since there would be one fewer person querying it. Does anyone know if such a dump exists, or should I just write a scraper against the web API?
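
In case no dump exists, here is a rough sketch of the kind of polite, paginated scraper I have in mind, in Python. The choice of the /games/ endpoint, the 20-second delay, and the JSON Lines output are my own assumptions, not anything the API prescribes:

    # Paginate through the /games/ endpoint and append each record to a JSONL
    # file, pausing between pages to stay well under the hourly request cap.
    import json
    import time
    import requests

    API_KEY = "<KEY>"  # placeholder, substitute a real key
    BASE = "https://www.giantbomb.com/api/games/"
    HEADERS = {"User-Agent": "gb-dataset-builder"}  # descriptive User-Agent, per API guidelines
    PAGE_SIZE = 100  # maximum results per page for this endpoint

    def scrape_games(out_path="games.jsonl", delay_seconds=20):
        offset = 0
        with open(out_path, "w", encoding="utf-8") as out:
            while True:
                params = {
                    "api_key": API_KEY,
                    "format": "json",
                    "limit": PAGE_SIZE,
                    "offset": offset,
                }
                resp = requests.get(BASE, params=params, headers=HEADERS, timeout=30)
                resp.raise_for_status()
                payload = resp.json()
                results = payload.get("results", [])
                if not results:
                    break
                for game in results:
                    out.write(json.dumps(game) + "\n")
                offset += PAGE_SIZE
                if offset >= payload.get("number_of_total_results", 0):
                    break
                # Sleep between pages so the scraper stays comfortably inside
                # the documented per-hour request limits.
                time.sleep(delay_seconds)

    if __name__ == "__main__":
        scrape_games()

A static dump would obviously be preferable to running this at all, which is why I'm asking first.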