Archive for the ‘Information Technology’ Category

October 23, 2010 0

Week 42 Predictions

By in Artificial Neural Networks

Here are this week’s predicted vs. actual graphs. I overlaid my prediction graphs on this week’s Yahoo! weekly graph. I did this as I’m now pushing more toward a business, “USA Today” look and less toward a “hard line” engineering output. I want to provide meaningful results that investors can easily use.

Overall my predictions are pretty good. I know from other experiments this week that the proper selection of inputs is critical to the prediction’s outcome. While I haven’t yet looked at the two other securities I analyzed, I am nearly certain that they will “bomb.” My backtesting/numerical analysis showed that the neurals didn’t “get” those two securities with the inputs I supplied. In other words, if the neurals are going to be wrong, I can tell you ahead of time with fair certainty. Here are the two I knew would be right. I bet that with more experimentation I can make them even better (hint: trend reversal signals):

October 20, 2010 0

PLUG Advanced Topics: Allison Randal

By in Linux

Allison Randal gave a presentation entitled, “Ubuntu Release Engineering” last night in Portland. During her presentation Allison gave some insights into how she coordinates the efforts of her developers and how the release cycle helps make sure the project members’ efforts are coordinated.

Allison displayed two graphs in particular that piqued my interest. The first was a sloping bar graph depicting completed tasks in green and incomplete tasks in red. As time progressed, the green area at the top grew while the red area shrank. The graph was a visual way of representing project progress over time, and it instantly showed how much still needed to be done. Very nice.
The second graph showed the spatial relationships between the various projects in Ubuntu’s world. The graph is created automatically by scripts that perform natural language parsing of the work orders; from that information an updated spatial view of the relationships between the projects is generated.
You can tell that Allison cares about her people and the projects she oversees. She’s genuinely conscientious about keeping her finger on the pulse of the OSS community.

October 19, 2010 0

Why I Do It

By in Information Technology

The developers are huddled in a semi-circle, talking excitedly about the new possibilities for their projects.

The idea that they can now track their projects–the wonder of their repositories accessed through the web-based project site–feels electric.

So it was when I showed my clients Redmine and what it meant for their productivity.

Getting people excited, providing leadership and helping them move forward is why I love IT so much.

It’s only the beginning. They’ll benefit from my years of experience, and when we’re finished my client will have a very efficient and effective developer team that simply meets the customer’s needs.

September 29, 2010 0

Logical Volume Management

By in Information Technology, Linux

Logical volumes are logical constructs relating to mass storage devices. Logical Volume Management systems make several physical disks seem like one or more large storage areas.

I wrote a presentation handout on LVM as seen in the linked document entitled, Logical Volume Management.
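As a quick illustration of the idea, here is a typical LVM setup that pools two disks into one volume group and carves a logical volume out of the pool. The device names and sizes are hypothetical, and these commands need root and will destroy any data on the disks, so treat this as a sketch rather than something to paste in:

```
pvcreate /dev/sdb /dev/sdc           # mark two physical disks for LVM use
vgcreate datavg /dev/sdb /dev/sdc    # pool them into a single volume group
lvcreate -L 500G -n datalv datavg    # carve one large logical volume from the pool
mkfs.ext4 /dev/datavg/datalv         # format and mount it like any other block device
```

The payoff is that the logical volume can later be grown (lvextend) across additional disks without repartitioning.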

September 29, 2010 0

Server Side Spam Filtering

By in Information Technology, Linux, Programming

I ran across a document that I wrote in 2003 outlining the spam filtering process in Postfix for a client I had at that time. I had fun working with this client, not least because they were happy to let me release my documents under the GPL v2 license.

I have the PDF document linked here.

September 28, 2010 0

Update Databases Dynamically

By in Artificial Neural Networks, Programming

I recently had to work with data for importation into a database that provided several challenges. Here’s the raw output:

Calculating indicator AroonUp[25, {I:Prices HIGH}, {I:Prices LOW}] …
AroonUp[25, {I:Prices HIGH}, {I:Prices LOW}][2010-05-03 09:01:00] = 68.0000
AroonDown[25, {I:Prices HIGH}, {I:Prices LOW}][2010-05-03 09:01:00] = 12.0000
AroonOsc[25, {I:Prices HIGH}, {I:Prices LOW}][2010-05-03 09:01:00] = 56.0000

There are about 45,000 of these records. My table should look like this:


| date       | time     | aroonup25 | aroondown25 | aroonosc25 |
| 2010-05-03 | 09:01:00 | 68.0000   | 12.0000     | 56.0000    |


I don’t know ahead of time which indicator the file will specify (i.e. up25, down25, osc25) nor do I know whether or not the record is already in the database.

Getting the values into the correct columns is especially important as a) I need to ensure the date & time correspond with the correct values (using UNIX’s ‘cut’ just won’t cut it, pun intended) and b) over the course of literally hundreds of thousands of records, performance becomes a real issue.
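One detail the INSERT … ON DUPLICATE KEY UPDATE statements rely on: MySQL only detects a “duplicate” if the table has a unique key covering (date, time). A hypothetical table definition (the column types here are assumptions, not the actual schema) would be:

```sql
CREATE TABLE ARG (
    date        DATE NOT NULL,
    time        TIME NOT NULL,
    aroonup25   DECIMAL(10,4),
    aroondown25 DECIMAL(10,4),
    aroonosc25  DECIMAL(10,4),
    PRIMARY KEY (date, time)   -- the upserts key off this
);
```

With that key in place, three separate INSERTs for the same minute collapse into one row, each filling in its own column.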

I wrote a script (listed below) that coordinates the indicator type, date, time, and value. The first step is to parse the input file into database syntax. Here’s the intermediate step’s output:

INSERT INTO ARG (aroonup25, date, time) VALUES ('68.0000', '2010-05-03', '09:01:00') ON DUPLICATE KEY UPDATE aroonup25='68.0000';
INSERT INTO ARG (aroondown25, date, time) VALUES ('12.0000', '2010-05-03', '09:01:00') ON DUPLICATE KEY UPDATE aroondown25='12.0000';
INSERT INTO ARG (aroonosc25, date, time) VALUES ('56.0000', '2010-05-03', '09:01:00') ON DUPLICATE KEY UPDATE aroonosc25='56.0000';

Now that I’ve created a file that contains the correct database syntax, all that’s needed is to import it into the database:

$ cat ./aroon.sql | mysql minute -u cstevens -p
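On the performance side, one easy win is to wrap the whole batch in a single transaction so MySQL doesn’t autocommit each of the ~45,000 rows individually. A sketch, assuming the generated file is named aroon.sql as above (the sample statement here stands in for the real generated file):

```shell
# stand-in for the generated file: one statement from the parsing step
printf '%s\n' "INSERT INTO ARG (aroonup25, date, time) VALUES ('68.0000', '2010-05-03', '09:01:00') ON DUPLICATE KEY UPDATE aroonup25='68.0000';" > aroon.sql

# wrap the generated statements in one transaction instead of autocommitting each row
{ echo 'START TRANSACTION;'; cat aroon.sql; echo 'COMMIT;'; } > aroon_tx.sql
```

The wrapped file imports the same way: cat ./aroon_tx.sql | mysql minute -u cstevens -p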

While the script seems simple enough, it’s actually a cornerstone for building a sophisticated method of storing data (indicator data, in this case).

Here’s the script:




#break down the input file
for i in 25   #25 is the period length of the indicator (for future expansion to other time lengths)
do
    #keep only the value lines, grab 'em (ticks), parse, and convert to lower case
    grep '] =' "$parse_file" \
        | awk -F'[' -v counter="$i" '{print $1 counter, $3}' \
        | sed 's/] =//g' | tr '[:upper:]' '[:lower:]' > "$tmp_parse_file"

    #start with a fresh sql file each run
    if [ -f "$sql_file" ]; then
        rm "$sql_file"
    fi

    # create sql file output
    while read indicator date time value
    do
        echo "INSERT INTO ARG ($indicator, date, time) VALUES ('$value', '$date', '$time') ON DUPLICATE KEY UPDATE $indicator='$value';" >> "$sql_file"
    done < "$tmp_parse_file"
done

Note that this script contains a couple of “hard coded” variables. I consider this poor programming practice. In this case, the hard coded variables are actual variables in my master script. I will run through the necessary changes as I import this code into the code tree.
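As a sketch of that cleanup (the function name here is hypothetical), the SQL-emitting line can be factored into a function that takes its fields as parameters instead of relying on names wired into the loop:

```shell
# hypothetical helper: emit one upsert statement from its four fields
emit_sql() {
    indicator=$1; date=$2; time=$3; value=$4
    echo "INSERT INTO ARG ($indicator, date, time) VALUES ('$value', '$date', '$time') ON DUPLICATE KEY UPDATE $indicator='$value';"
}

emit_sql aroonup25 2010-05-03 09:01:00 68.0000 >> aroon.sql
```

The loop body then shrinks to a single call, and other indicator periods or table names can be threaded through as arguments from the master script.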

September 25, 2010 0

Creating New Business Models

By in Information Technology

The other night I made a new friend. Gordon and I got to know each other, and inevitably I asked him what he did for a living.

He told me that he translated Japanese text for businesses. He explained that he received manuscripts from a few firms for his translations.

“What kinds of manuscripts do you often get?” I asked. “It varies, but I’ll often get business proposals. I think my favorites are newspaper articles; those are the most interesting.”

“Really?” I thought. “Say,” I said, thinking in ‘early-often’ mode, “what would happen if you scanned all the major Japanese newspapers daily and translated all the articles that are relevant to your clients?”

“Now that would be something,” Gordon replied, “I don’t think I would do it for myself but my PR firms would love a service like that. They could outsource all the newspaper work to me.”

In technical terms, we’re talking about ‘push’ technology. In this instance, Gordon’s IT system scans the major newspapers each day for items that are in his PR firms’ interests, translates the content, then pushes the translation to the PR firm for delivery. The advantages of this model are:

  • Steady revenue flow through contracts
  • Access, even though indirectly, to potentially hundreds of clients
  • Fewer clients to work with

Then, of course, we talked about the idea of an automated translation algorithm to help out with the translations.

September 23, 2010 0

Spammers and the Mothers Who Love Them

By in Information Technology

Spam messages purporting to come from people on your contact list are becoming a problem. You receive a message, look at the sender and think, “oh, I received a message from my friend.” The trouble is that instead of a friendly greeting you receive a (sometimes offensive) advertisement from a mass-mailer, a.k.a. a “spammer.”

This happens because either a) the sender’s PC was infected by malware, or b) the message was “spoofed” with your friend’s email address by a spammer using an infected computer that happens to have your friend in its contact list.

Unfortunately, the end result is the same, and often so is the solution: the sender needs to change his or her email address.
To track down the problem, first look in your “Sent Items” folder to see if your account actually sent the messages. This happens when an infected computer (the sender’s laptop, etc.) is “taken hostage” and starts sending emails to the recipients in the sender’s contact list. If you (the sender) find that these messages were sent by your computer, then you’ll need to disinfect your system.

Second, make sure your login credentials are strong: change your password to at least seven characters, including at least one number (not at the end) and a special character (%&*#) or two. Also consider changing your login’s secret questions and answers.

Third, as mentioned above, scan and disinfect your machine if you find that the messages were sent from your account.

Finally, it is best to change your address. Let everyone on your contact list know that you’re now using a new address.

For parting thoughts I was going to write something like, “overall, spammers cost us millions… etc.” but in the final analysis it’s more than that: spammers are a nuisance, and they are sometimes even hurtful. They need to be dealt with and stopped.

September 17, 2010 0

Ride Maps

By in Cycling, Information Technology


Here’s a sampling of the ride maps created via GPS…

This topographic map features color coding by elevation:

It’s all about speed, after all. Here’s a route map color coded to pace:

Here’s an elevation profile:

September 16, 2010 0

Satellite Photos with Tagged Image Locations

By in Programming

Have a GPS and a digital camera, and want a map depicting where you took your images? No problem.

Here are two scripts I wrote (one in shell, the other in Ruby) that grab a satellite photo based on your GPS data and place a point corresponding to where you took each picture. The scripts synchronize your track log with your images’ header information to produce a photo showing where you took each image. Here’s a sample:
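The core of the synchronization can be sketched in a few lines of shell. The track-log format (“epoch lat lon” per line) and the photo timestamp below are assumptions for illustration; a real run would extract the timestamp from the image’s EXIF header (e.g. with exiftool) and convert the GPS log first. The idea is simply to find the track point recorded closest in time to the moment the photo was taken:

```shell
# sample track log: "epoch lat lon" (hypothetical format)
printf '%s\n' \
    '1284689990 45.5231 -122.6765' \
    '1284690005 45.5240 -122.6770' \
    '1284690100 45.5300 -122.6800' > track.log

# timestamp the camera recorded for the image (assumed already extracted)
photo_ts=1284690000

# pick the track point whose time is closest to the photo's time
awk -v ts="$photo_ts" '
    { d = $1 - ts; if (d < 0) d = -d
      if (NR == 1 || d < best) { best = d; lat = $2; lon = $3 } }
    END { print lat, lon }
' track.log
```

That lat/lon pair is what gets plotted on the satellite image.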

The scripts require GPS drive and geo-map.

Main Script