All Magic comes with a Price

Brilliant article.


Combining CSV Files and Verifying Line Count

1500 .csv files were bogging down an import system due to the sheer number of files.

Combine files into 1 file.

Problem #1:
All files have a header row with field names.

Solution #1:
Combine all the files while stripping off the header row of each:

find . -name "*.csv" | xargs -n 1 tail -n +2 > big.csv

Problem #2:
I’m not sure if I have all the lines I should.

Solution #2:
Verify number of lines in the file and compare to row count from a database.

sed -n '$=' big.csv
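To sanity-check the count without a database handy, the same arithmetic can be run from the shell: total lines across the sources, minus one header per file, should equal the line count of big.csv. A sketch with made-up fixture files (the demo/ directory and its contents are invented for illustration):

```shell
# Fixture files invented for the demo; substitute your real source directory.
mkdir -p demo
printf 'a,b\n1,2\n3,4\n' > demo/one.csv
printf 'a,b\n5,6\n' > demo/two.csv

# Same combine step as above, pointed at the demo directory.
find demo -name "*.csv" | xargs -n 1 tail -n +2 > big.csv

total=$(cat demo/*.csv | wc -l)             # every line, headers included
files=$(find demo -name "*.csv" | wc -l)    # one header per source file
expected=$((total - files))
actual=$(sed -n '$=' big.csv)
[ "$expected" -eq "$actual" ] && echo "line counts match: $actual"
```

If the two numbers disagree, a source file is probably missing its header row (so a real data line got stripped) or has a trailing blank line.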

Problem #3:
This removed the header row that I need at the very top of the file.

Solution #3:
Create a file (header.csv) with a single line containing the field names that we stripped off earlier.
Then take header.csv and big.csv and combine them into our final csv file:

cat header.csv big.csv > final.csv
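For what it's worth, the strip-then-reattach dance can also be done in one pass with awk, keeping the header of the first file and skipping it in every later one. This is an alternative sketch, not the original approach above, and the demo files are invented for illustration:

```shell
# FNR restarts at 1 for each input file; NR never restarts.
# So FNR==1 && NR!=1 is true only on the repeated header lines.
mkdir -p demo
printf 'a,b\n1,2\n' > demo/one.csv
printf 'a,b\n3,4\n' > demo/two.csv
awk 'FNR==1 && NR!=1 {next} {print}' demo/*.csv > final.csv
head -n 1 final.csv    # a,b  (the single surviving header)
sed -n '$=' final.csv  # 3    (header + two data rows)
```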

A Vision for Teaching Children

To instill a lifelong love of learning.

By encouraging curiosity, question-asking and interruptions.

By setting up fences and borders for the learning experience
wide enough they are never seen.

Because gentle guidance keeps them approximately in the center
of their thousand-mile-wide educational experience.

Without focusing on results, and instead focusing on the person
so their curious nature is not crushed
and their intellectual boundaries are not artificially limited
through the drive to achieve arbitrary results
to satisfy a pointless numerical goal
that has nothing to do with actual intelligence or learning.

Understanding that the result of a fully engaged student
who has been taught to love learning
who has been allowed to ask unrelated questions
who has never heard, “you are special” nor “you are ignorant”
despite being both
and who has been guided without visible boundaries

Will be a person who will produce amazing results
who will never stop learning
who will never know any intellectual limits
who will be able to create anything
and will never cease to amaze the world.

The Mac equivalent of the Apple ][ speed command

How to slow your connection down, or the Mac equivalent of the Apple ][ speed command.


Configure a pipe that has the appropriate bandwidth limit and delay.
sudo ipfw pipe 1 config bw 16Kbit/s delay 350ms

Attach it to all traffic going to or from port 80.
sudo ipfw add 1 pipe 1 src-port 80
sudo ipfw add 2 pipe 1 dst-port 80

Now traffic coming from or going to port 80 anywhere is limited by the pipe that you specified. Do your testing, and once you get frustrated with slow access to the web, remove the rules like so:
sudo ipfw delete 1
sudo ipfw delete 2

Finally, delete the now defunct pipe like so:
sudo ipfw pipe 1 delete


Put together in one bash script (adjust the Kbit/s and delay to suit your needs):

sudo ipfw pipe 1 config bw 56Kbit/s delay 500ms
sudo ipfw add 1 pipe 1 src-port 80
sudo ipfw add 2 pipe 1 dst-port 80
read -t 1600 -p "Hit ENTER to restore"
sudo ipfw delete 1
sudo ipfw delete 2
sudo ipfw pipe 1 delete

How big is BIGINT?

While trying to decide whether a bigint column would hold enough values to store historical data in a database (storage problems aside), this helped me put it into perspective and decide that, yes, for now, bigint would be big enough for me.

eighteen quintillion, four hundred forty-six quadrillion, seven hundred forty-four trillion, seventy-three billion, seven hundred nine million, five hundred fifty-one thousand, six hundred fifteen (18,446,744,073,709,551,615, the maximum value of an unsigned BIGINT)

Some perspective:
~ 1 trillion pennies (Sears Tower, Empire State Building & others for scale)

~ 1 quadrillion pennies

~ 1 quintillion pennies


BIGINT provides 18.446 quintillion unique values.