Go fetch
This post is about a simple Go program that demonstrates how easy it is to execute tasks such as fetching URLs concurrently. The program measures the time taken for each individual task as well as the overall elapsed time, so we can see that the tasks are in fact being executed concurrently.
First we look at the elements of the program that we need to understand before we look at the complete program.
You get the current time with time.Now() from the time package. To get the interval from t1 until now, use time.Since(t1), which returns a time.Duration measured in nanoseconds. To get the time in seconds, call the Seconds() method, as in time.Since(t1).Seconds().
To make an HTTP GET call, use the Get() function from the http package, as in http.Get(url). It returns two values: resp of type *http.Response, and an error. To get access to the HTTP response body, use the Body field of *http.Response. To copy from src to dst, use io.Copy(dst, src). To discard the body, we can copy into something equivalent to /dev/null by setting the destination to ioutil.Discard, an io.Writer that throws away everything written to it.
Declare and initialize a channel called mychannel that can send or receive strings with mychannel := make(chan string). Put somestring into the channel with mychannel <- somestring, and receive from the channel with somestring := <-mychannel. Declare a function parameter that is a send-only channel of strings with mychannel chan<- string.
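The channel operations above can be sketched as follows (the report function and its message are illustrative names of mine):

```go
package main

import "fmt"

// report sends a message into a send-only channel parameter.
func report(mychannel chan<- string, msg string) {
	mychannel <- msg
}

func main() {
	mychannel := make(chan string) // unbuffered channel of strings

	go report(mychannel, "done") // send from a separate goroutine

	somestring := <-mychannel // receive blocks until a value arrives
	fmt.Println(somestring)   // prints "done"
}
```

Note that on an unbuffered channel a send blocks until someone receives, which is why the send happens in its own goroutine here.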
Now that the different bits make some sense, it’s time to look at what the program below does. The program takes a slice of URLs (roughly, an array) and makes an HTTP GET call for each URL. It measures the time taken for each HTTP call and the total time taken. For each URL, it starts a separate goroutine, which puts its result, whether the happy path or an error response, into a channel. Finally, the main function receives everything from the channel. The program ends when all expected responses have been taken from the channel.
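A sketch of how those steps fit together, combining the timing, HTTP, and channel pieces from above (the URLs in the slice are placeholders; substitute your own):

```go
package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
	"time"
)

func main() {
	start := time.Now()
	urls := []string{"https://golang.org", "https://go.dev"} // example URLs
	ch := make(chan string)
	for _, url := range urls {
		go fetch(url, ch) // start one goroutine per URL
	}
	for range urls {
		fmt.Println(<-ch) // receive one result per URL
	}
	fmt.Printf("%.2fs total elapsed\n", time.Since(start).Seconds())
}

// fetch GETs the URL, discards the body, and sends a result
// (timing on success, or the error) into the channel.
func fetch(url string, ch chan<- string) {
	t1 := time.Now()
	resp, err := http.Get(url)
	if err != nil {
		ch <- fmt.Sprint(err) // error path goes into the channel too
		return
	}
	nbytes, err := io.Copy(ioutil.Discard, resp.Body)
	resp.Body.Close()
	if err != nil {
		ch <- fmt.Sprintf("while reading %s: %v", url, err)
		return
	}
	ch <- fmt.Sprintf("%.2fs  %7d bytes  %s",
		time.Since(t1).Seconds(), nbytes, url)
}
```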