Works for one link; now let's wrap it in a loop so we can run it on many links. Trial run first, against a file containing a single link:

  while read url; do curl -I "$url" 2> /dev/null | head -n1 | awk -v url="$url" '{print $2, url}'; done < single_link.txt
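For context: the first line curl -I prints is the HTTP status line, so awk's $2 is the status code. A minimal check against one URL (example.com is just a placeholder here):

  curl -I https://example.com 2> /dev/null | head -n1
  # -> HTTP/1.1 200 OK   (or "HTTP/2 200" if HTTP/2 is negotiated; $2 is the code either way)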
Redirect STDOUT to a file. Bonus: talk about > vs >>.

  while read url; do curl -I "$url" 2> /dev/null | head -n1 | awk -v url="$url" '{print $2, url}'; done < many_links.txt > statuses.txt
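A quick sketch of the difference, using a throwaway scratch file (demo.txt is just an illustration): > truncates the target before writing, while >> appends to it.

  echo first  > demo.txt    # creates/truncates: file contains "first"
  echo second > demo.txt    # truncated again: file contains only "second"
  echo third >> demo.txt    # appended: file contains "second" then "third"
  cat demo.txt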
A pipeline is like a stream of water (the water is your data):
- Handle one line of input at a time
- Easier to test
- Easier to see when things go wrong
- No need to load an entire file or batch to do the work (don't load the ocean, just stream it through a program; a small demonstration follows this list)
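A tiny demonstration that pipelines stream rather than batch: this returns almost instantly even though seq was asked for a billion numbers, because head takes the first three lines and closes the pipe, stopping seq early. Nothing ever materializes the full billion.

  seq 1 1000000000 | head -n 3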
• Do trial runs with (sketch after the reading list):
  • small inputs (1 link vs 1 mil)
  • STDOUT printed to the screen (vs a file)
• Capture results in a text file
• Save useful programs for later!

Reading
• Pragmatic Programmer
• Unix Sys Admin Handbook
• Unix Philosophy
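A sketch of a trial run, reusing many_links.txt from earlier: test on the first URL only, with output on the screen, before committing to the full file.

  # Trial run: first URL only, results printed to the screen
  head -n 1 many_links.txt | while read url; do
    curl -I "$url" 2> /dev/null | head -n1 | awk -v url="$url" '{print $2, url}'
  done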