Sunday, March 31, 2013

Creeping up on OOP in Python - Part 1

Background and Spoiler Alert

Project Euler is a set of challenge problems, hundreds of them, most of which you'll probably need to write a program to solve. You are allowed to write the program in any programming language of your choice. Some of the problems require reading an input file, which is linked from the problem description page. All of the programs produce an answer: a single numeric output (sometimes the answer is a very big number with many digits in it). The site poses the problems and you get to tell the site the number that is the answer. The site provides no guidance beyond feedback as to whether your answer is right or wrong. A right answer unlocks access to a bulletin board where you can share how you solved the problem. Lots of bragging there from folks about how few lines of code they used, or about how real mathematical insight made the program so much easier than it appeared to be at first blush.

My background is software development. I've worked with mathematicians, but make no claim to being a mathematician myself. So solutions founded on real mathematical insight are not the sorts of things to expect from me. Sometimes, armed with the mathematical insight explained on the bulletin board, I've found it worthwhile to revisit my solution to make my code better. Sometimes I revisit my code with a critical eye just to make it better, to satisfy myself. I do miss having talented co-workers who can review code and make suggestions at the lunch table. But as a retiree, it falls on me to be my own reviewer. Fortunately, I have always had strong opinions about the quality of code, and a thick skin, so I can provide my own harsh comments to myself and get some cleaner code out of the introspection.

This series of articles is going to look at Problem 22 of Project Euler and is, quite frankly, a spoiler. If you'd rather do it yourself without seeing somebody else's solution, do not open any of the links to the code samples here, and do not read the rest of this blog post if you consider description of the code of a solution to be off-limits too. Before posting this article, I Googled for:

project euler problem 22 solution

and can see I'm by no means the only person breaking the wall and revealing a solution on the Web.

My motivation for tackling problems in Project Euler was to learn the Python programming language. Shortly before my retirement, David James, then of the Bell Labs statistics department of the Murray Hill Math research center, had suggested to me that Python looked like a great programming language to learn. I had no reason to doubt that, but was knee deep in projects in other languages, so I didn't have time just then to stop and learn something entirely new to me. Now that I'm retired, I have time for such adventures. Sadly, except for whatever value there is in telling the tales of such adventures here around the campfire, it isn't exactly clear what good will come from my learning yet-another-programming-language. If nothing else, it has certainly shown to me that David James has an eye for value in programming languages. His recommendation of Python appears to have been well founded indeed. I'm still a long way from becoming an expert in Python programming, but I find it is certainly a pleasant programming language to use to solve problems. Python code is readable (unlike, say, Perl or APL) and tends to be quite compact (unlike Java). The language supports a broad range of programming styles and brings more possibilities for control structures than any language I've worked in except maybe for assembler language, where the language brings no real structure to the program and anything goes that you can dream up - but document your assembler code well so the folks who come later can figure out what you did!

Old School: Structured Programming

My roots are in "structured programming" from the days when "Goto considered harmful" was still considered to be a controversial position. Happily, I've long since accepted the truth of that position and so don't miss at all that Python has no goto statement.

Quite some time ago, I laid out a list of things I believed I needed to master to consider myself a Python expert. Perhaps surprisingly, the language itself is only a small part of the things to learn. I blogged my list in an article that started out to be about "literate programming", but that mutated as I created that article into my Python study guide.

So, my solution of Project Euler problem 22 started out as a small, reasonably clean structured program.

The problem statement is here, complete with a link to the names.txt file that the program is to process.

By the way, I've been working with Python 2.7. Availability of libraries for Python 3 has been my reason for hanging back with Python 2. As time goes by, that presumably will become less and less of a good reason.

Wrapper Main Program

Since all of the Project Euler problems produce a single numeric answer, I base each of my Project Euler programs on a wrapper main routine that invokes the code for the problem and expects the called code to return a value. The wrapper has a print statement to print the returned value, plus infrastructure to time the called routine and print a report of the runtime. My code for the wrapper main routine can be found here: The wrapper expects a single command-line argument telling it where the actual program code is located. It assumes that the program to be run is in the file named by that argument, and that the routine to be invoked from that file is named main().

I've not been entirely delighted with my wrapper main program. It reports elapsed time for the test, not CPU time. And the time routines from the library seem rather coarse in their granularity, but Project Euler's guidance on runtimes is only that the data for these problems shouldn't take more than a minute to process on a typical PC or you should rethink your algorithm. My timer code may be a little crude, but it can certainly show whether I've met that guideline.

I've written my program but should it take days to get to the answer?

Absolutely not! Each problem has been designed according to a "one-minute rule", which means that although it may take several hours to design a successful algorithm with more difficult problems, an efficient implementation will allow a solution to be obtained on a modestly powered computer in less than one minute.

If you really want to think about measuring time in your program, you ought to study this page:

The Rest of the Problem Solution

My first-version code for problem 22 is here:

There are commented-out print statements in the file that I used when I was doing my testing of the code.

One other tool that I learned to work with as part of my study of Python is the source-code-control tool git, as provided by GitHub. As I got used to the power of git, I found that the code really reads better if you remove the debugging print statements, not merely comment them out. But the edition of the code that I'm showing is from earlier in the process, before I reached that realization.

The description of the input file for Problem 22 says the input is a sequence of names, each enclosed in double-quotes and separated from each other by commas. Now Python is great at that kind of parsing of input, but I dislike writing and debugging the fiddly code needed to carefully parse an input file, particularly if the parser is to make reasonable recovery from errors in the input's punctuation. So, I invoked the computer science principle that "there's no problem so big or so complicated that it can't be run away from". My escape hatch was to apply the pyparsing library module. pyparsing is, in my opinion, very Pythonic in its approach. The library allows you to define a grammar in terms of Python objects and then apply that grammar against your input. There are library alternatives that may give better performance, but I found pyparsing to be reasonably well documented and a joy to apply to the straightforward problem at hand. I ended up having to write delightfully little code to solve the problem. (I concede that later, as I worked through Udacity's CS101, I came to understand that Python may be good enough at parsing input strings that perhaps I should have just gritted my teeth and written the straightforward code to parse the input file. But I do suspect that familiarity with pyparsing will be useful to me in the future if I am ever faced with parsing a more complex input structure.)
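For what it's worth, the "grit your teeth" alternative needn't even be hand-rolled: the standard csv module already understands comma-separated, double-quoted fields, so the one-line names.txt format reduces to a single reader call. (This sketch is my illustration of that alternative, not the blog's actual pyparsing-based code.)

```python
import csv
import io

def parse_names(text):
    """Parse one line of comma-separated, double-quoted names into a plain list."""
    # csv.reader strips the surrounding double quotes by default.
    return next(csv.reader(io.StringIO(text)))
```

For example, parse_names('"MARY","PATRICIA","LINDA"') yields a plain list of the three bare names.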

This pyparsing library was my first serious encounter with the object-oriented programming (OOP) aspects of the Python language. Python is gentle in providing for such modern stuff, yet doesn't force you to really grok the feature before letting you write reasonable software in Python.

The Devil is in the Details

Python is a relatively small programming language. As I mentioned, I strongly believe that the Python language itself is only a small part of what I have to learn to be a Python expert. Nevertheless, there were still details of the language that I sort of stumbled into understanding as I worked this problem. I kept notes that I'll share with you here...

The ParseInput routine effectively has only 4 lines of code:

    def ParseInput(filename):
        from pyparsing import (delimitedList, QuotedString)
        grammar = delimitedList(QuotedString('"'))
        return(list(grammar.parseFile(filename, parseAll=True)))

The def statement is the boilerplate to define a function. The from ... import (...) statement is where I import the names I need from the pyparsing module. My first try was to say:

     from pyparsing import *

which I expected would import all the "public" names from the module into ParseInput's name space. That shortcut is frowned upon, as a change in the library's module could bring a name conflict into my function, but I figured it would save me the trouble of having to name every little thing I wanted to try. To my surprise, the import * was rejected as only usable at module level. I think it's saying that if I'm writing my own module, I can import * all the names from another module, but if I'm just inside a function, I have to be more specific. Rats, shortcut denied. Next try I said:

     from pyparsing import (delimitedList, QuotedString, parseFile)

but it objected that I couldn't import parseFile like that. delimitedList and QuotedString are classes defined in the pyparsing module. They are subclasses of the ParserElement class. The advantage of importing the names into my function is that I can refer to them without having to qualify each name with the pyparsing module name; pyparsing.delimitedList is kind of long-winded in composing a grammar. parseFile, though, is different. It's a method (an invocable function attribute) of the grammar. I'd have to dig into the source code to be sure I'm saying this right, but I think that the grammar built from the ParserElements is itself a ParserElement, so I believe parseFile and its brother, parseString, are methods that can be invoked on a ParserElement.
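As an aside, that earlier wildcard-import rejection is easy to demonstrate without pyparsing installed. Python 2 issued a SyntaxWarning for a wildcard import inside a function; Python 3 makes it a hard SyntaxError. Compiling an offending definition shows the rejection:

```python
# A function body containing "from module import *" fails to compile
# in Python 3 (and drew a SyntaxWarning back in Python 2):
source = '''
def parse_input(filename):
    from os.path import *    # wildcard import inside a function body
    return filename
'''
try:
    compile(source, "<example>", "exec")
    rejected = False
except SyntaxError:
    rejected = True          # "import * only allowed at module level"
```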

In the 3rd line of code, I creatively name my grammar "grammar". delimitedList defaults to a list of comma-separated items. The grammar for each item is the argument you pass to delimitedList, so I passed in QuotedString('"'). The argument to QuotedString is the quote character it is to look for. The default is that it strips out the quotes and just returns the stuff enclosed in them. On my first try I didn't know that QuotedString required an argument, so I coded it as QuotedString() and got a message about a class being invoked with one argument where it expected two.

The object-oriented stuff in Python always implies an argument of "self", the object on which the method is being invoked. So, my missing argument was the missing "2nd" argument.
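A toy class (my own stand-in, not pyparsing's) shows the same mechanics: __init__ takes two parameters, self and the quote character, so calling the class with no explicit argument supplies only one of the two.

```python
class ToyQuoted(object):
    def __init__(self, quote_char):   # two parameters: self and quote_char
        self.quote_char = quote_char

try:
    ToyQuoted()       # self is implied; quote_char is the missing "2nd" argument
    raised = False
except TypeError:
    raised = True     # e.g. "missing 1 required positional argument"
```

Calling ToyQuoted('"') instead instantiates the class and stashes the quote character on the new object.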

I mistakenly reasoned that maybe I should just pass in QuotedString without the (). That got me an obscure (to me) message that I can't "combine type 'type' with a ParserElement". I eventually figured out what it meant. QuotedString is a subclass of Token, and Token is a subclass of ParserElement, but QuotedString with no arguments is a class definition, a type. That is, type(QuotedString) is 'type'. I tried

     qs = QuotedString('"')
and got type(qs) is "QuotedString" and isinstance(qs, ParserElement) was True. Whew! Good thing this is all open source, as I don't think I'd have made much progress at all if I couldn't follow Obi-Wan's advice and "Use the source, Luke".

delimitedList internally uses + to construct the grammar for what it is looking for. Pretty straightforward: it wants a match to the argument grammar, followed by a comma (which it suppresses, i.e. gobbles up), followed by another match to the argument grammar; or it will settle for a match to the argument grammar not followed by the delimiter, to cover the last item in the list (no need for a trailing comma after the last item). I'm oversimplifying, as the list can have an arbitrary number of comma-separated items in it, and optionally you can name an alternative delimiter instead of sticking with the comma.

ParserElement redefines the + operator to make it into an "and" concatenation of ParserElements, but it didn't want to concatenate a type to a ParserElement. Giving an argument to QuotedString causes an instantiation of the class, so I have an object that is an instance of a ParserElement, and that object is acceptable to the overloaded + operator for concatenation to another ParserElement object.
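Here is a sketch, loosely modeled on that behavior, of how an overloaded + can accept instances but balk at a bare class. All the names here are my own, not pyparsing's:

```python
class Element(object):
    def __init__(self, name):
        self.name = name

    def __add__(self, other):
        # Concatenation only makes sense between two Element *instances*.
        if not isinstance(other, Element):
            raise TypeError("cannot combine %r with an Element" % (other,))
        return Element(self.name + " " + other.name)

word = Element("word")
comma = Element("comma")
combined = word + comma      # two instances: concatenates fine
try:
    word + Element           # the bare class is a type, not an instance
    accepted = True
except TypeError:
    accepted = False
```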

The 4th line of code started out as:

     return(grammar.parseFile(filename, parseAll=True))

(The parseAll=True tells parseFile to throw an exception if the grammar doesn't successfully parse the whole of the input file. The default is that it parses as far as the grammar will match in the file and leaves things such that another grammar can be applied next.)

That return statement actually seemed to work, but the returned value wouldn't re-order when I invoked the sort method that is associated with lists. I used the type function to see what the type of the returned value was. It was a pyparsing.<something> type, which I'm guessing is a subclass of the built-in list type. It had no objection to my invoking a sort method on it, but it didn't sort the list. The solution was to explicitly convert the returned value to a plain old list using the list() function, which you'll now find in that 4th line of code.

So 4 lines of code written, and I'm sure glad no one is measuring my lines of code per hour productivity.

nameScore Routine

The next function is nameScore: 5 lines of code that, given a name, compute a score for it. A=1, B=2, C=3, .... The score is the sum of the values of the characters in the name. Straightforward iteration over list(name). list knows how to bust a string up into the sequence of characters that I want to process one by one.

valuedict is a nifty Pythonic feature. Basically it's a hash table. I wasn't quite sure where to define it, so I just put the initialization into main() and passed the valuedict as an argument to nameScore. Maybe if I were more object-oriented in my thinking, I'd have made nameScore into some kind of a class with an __init__ function that built a valuedict that stays with the nameScore object. In Part 2 of this article I'll go back and try it that way just to see what it looks like.
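A sketch of that pair as described, with valuedict built once and passed in as an argument (this is my reconstruction; the blog's actual code is in the linked file, and it restricts valuedict to upper-case letters here for brevity):

```python
import string

# A=1, B=2, ... Z=26.
valuedict = {letter: value
             for value, letter in enumerate(string.ascii_uppercase, start=1)}

def nameScore(name, valuedict):
    # Sum the per-letter values; iterating a string yields its characters.
    return sum(valuedict[ch] for ch in name)
```

The problem statement's own example works out: COLIN scores 3 + 15 + 12 + 9 + 14 = 53.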

The names.txt file for this problem is fairly massive: 46KB containing some 5000 names, all on one very long line of text. For debugging, I constructed a smalllist.txt file in the same syntax but with only a few names in it. Looking at the discussion board for the problem, a disappointing number of people pre-processed the names.txt file (e.g. using Excel) to pull the words apart and alphabetize them. Some even left the words with the double quotes still there, just giving " a zero value in scoring the names.

main() Routine

That brings us to the main() routine. Line 30 runs ParseInput on the input file to get the list of names into nameList. Line 36 applies the sort method of lists to put nameList into alphabetical order. Lines 46 to 50 initialize the valuedict. The given problem data has only upper-case letters. I threw in values for lower-case letters, numbers, space, and period so I could put "R. Drew Davis" into my smalllist.txt test data file. Lines 53-58 are the loop to tally up the weighted sum of all the name scores. And line 59 is where I return the result. 14 lines of code in main(). Clear and straightforward.
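Condensed, the tally that main() performs boils down to: alphabetize the names, then sum position times name score, with positions starting at 1. A self-contained sketch (the helper names are my own; the blog's real main() is in the linked file):

```python
import string

# A=1, B=2, ... Z=26.
valuedict = {letter: value
             for value, letter in enumerate(string.ascii_uppercase, start=1)}

def nameScore(name):
    return sum(valuedict[ch] for ch in name)

def totalScore(names):
    # Weighted sum: 1 * score(first name) + 2 * score(second name) + ...
    return sum(position * nameScore(name)
               for position, name in enumerate(sorted(names), start=1))
```

For a tiny list like ["MARY", "ANNA"], that's 1 * 30 for ANNA plus 2 * 57 for MARY, or 144.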

Object Lesson

The objective of this article was to get closer to understanding Object Oriented Programming and why it is useful in writing clean code. I haven't tried to provide a tutorial of the mechanics of using OOP in Python. Wesley Chun's book "Core Python Programming", 2nd edition, Chapter 13 is a decent reference for learning this stuff. Or if you are too cheap to invest in a reference book, dig into the Web with Google or your favorite search engine. There are plenty of web pages to be found that talk about Objects in Python.

In this article we looked at use of the pyparsing library's objects to define and apply a grammar. I haven't implemented a class of my own yet. We'll get to that in Part 2.

I'm still learning this stuff. If you spot anything that I've gotten wrong, please add a comment to let me know so I can correct both the article and my understanding of the topic. Thank you.

Sunday, March 24, 2013

Danny Hillis: The Internet Could Crash - We Need a Plan B.

Here's another interesting TED talk. The speaker is Danny Hillis, the designer of the now-defunct "Connection Machine", a massively parallel computer with a lot of lights on the front panels. One of those machines managed to show up as a backdrop in one of the Jurassic Park movies. When I was managing a computer center at Bell Labs, we did consider purchase of a "Connection Machine", but it seemed that my users weren't really ready for something way different from the Sun servers and DEC VAX machines that were the heart of our operation at that time. A "Connection Machine" seemed too risky in any case, since the price tag was large and the company attempting to sell us the system was not well established and therefore could (and indeed did) go out of business in the blink of an eye.

Danny Hillis: The Internet Could Crash - We Need a Plan B

The TED talk was given in February 2013.

Reminds me of the proposal I made to split up the computer center in Murray Hill Building 5. The assumption had been that in the event of a disaster that took out our computer center there, we'd fall back onto the facilities of the "Murray Hill Computer Center" in Building 2. That might have been a feasible plan when our Building 5 operation was small relative to the Building 2 computer center, but technology and user demand in Building 5 led to enormous growth in storage and compute capacity. It was clear to me that Building 2's computer center didn't have the oomph we'd need if something happened to Building 5. I found a small raised-floor area in Building 4 that was slated to be decommissioned and proposed that we move some of our "eggs" out of the Building 5 basket. I wasn't promising that we'd be able to transparently absorb the loss of Building 5, just that it would assure that our users wouldn't be entirely out of business if something happened to one of the buildings.

I was especially nervous about Building 5 because it has a wood-truss roof protected by a water-sprinkler fire protection system. One time, when a pipe in the ceiling burst, Rick, our lead computer operator, described it as "Rick's VAX Wash" as torrents of water poured down on one of our VAX 11/780's. Much to my disappointment, I couldn't stir up any interest among the users who funded our budget in splitting the computer center partially out of Building 5. Ultimately I left that organization for a less political and more technical position elsewhere within Bell Labs, Murray Hill.

But God saw to "meting out justice". That winter, heavy snows covered the roof of Building 5, and then rain saturated the accumulated snow, but temps dropped and turned the whole pile on the roof to heavy ice that refused to slide off the arched roof: a significant static load. The load was too much for the wood trusses, and one of them snapped during the night, shifting the load to its neighbors, which in turn similarly snapped. In the morning, the only visible symptoms were that the suspended ceiling tiles inside the building were strangely out of place. This led to the building people figuring out what was wrong and evacuating the building. Desks were set up in the main lobby of Building 6 so people could continue to have a place to work. The computer operators were issued hard hats and Building 5 was closed to all but "essential personnel". Rick called it "closed to all but dispensable personnel" until the trusses could be repaired several months later.

I admit to some small pleasure in quietly thinking: see, I told you we needed a Plan B. Alas, I suspect that Hillis's clarion call for a Plan B Internet is going to get as little support as my call for a Plan B for the Building 5 computer center. If it happens at all, maybe the Plan B Internet will be a side effect of the transition to IPv6? But if the IPv6 Internet is as vulnerable to attack as today's IPv4 Internet, that isn't much of a Plan B, as it merely requires a second attack to take down the second Internet. Should Plan B look way different from the traditional international Internet and just focus on smaller communities (per country or per state, with paranoid gateways to get from region to region)? I don't see how to do it "safely" without having stronger "central control" than the Internet model has today. Big Government central control (à la the "Great Firewall of China") makes me plenty nervous, so I hope the network gurus of the world have some better ideas.

Please add your comments to this blog if you have suggestions on how to get a Plan B in place. What would its scope be? What would it look like? How would you test it?

Saturday, March 23, 2013

3D printers and gun control

Time to sprinkle a little variety into this blog. So today's article looks at the emerging supply of 3D printers and contemplates what they mean in the silly discussions of gun control going on.

Not sure if you've been paying attention to the topic of 3D printers. The idea is to have a device that can take a computerized design for an object and actually fabricate that object. As inspired by the replicators on Star Trek? I do hear tell that 3D printers are among the things available on modern aircraft carriers to fabricate parts needed for aircraft repairs.

There's more than one variety of 3D printer. The most common seems to be the kind that extrudes a fine plastic strand and moves the head around much like an x-y plotter, slowly building up a 3D object, lowering the table as the object gains height. For around $3K you can have one in your garage:

Print Real Objects With The MakerBot Thing-O-Matic 3D Printer (3 minute video)

MakerBot Replicator 2X Now Shipping!

But really serious hobbyists build the printer from scratch, not store-bought.

An interesting twist is that some folks have been working on how to "print" a working plastic gun. A working plastic large capacity magazine clip is far easier. They want to open-source the plans. Would the government dare to declare such information as "classified" in the name of gun-control? Wouldn't it be easier to improvise a law that made it illegal to kill people unless the killing is in self-defense? Or how about a requirement that if you are planning to kill someone, you first need to apply for a permit to do so? Of course, anyone who killed someone without having obtained a permit would be subject to punishment under the law. (7 minute video news report)

3D printed AR-15 can fire off 600 rounds (just over a 1 minute video news report)

There are other technologies. One plastic variant uses a liquid plastic that solidifies when exposed to UV light. So there's a pool of the liquid plastic; the x-y plotter selectively hardens a layer of plastic at the surface, and then the platform lowers so the next layer of plastic can be selectively hardened atop the first layer.

Another technology uses powdered metal spread thin on the build platform. The x-y plotter applies glue to the metal that is to stay. Then the next layer of metal powder is troweled on by the machine, followed by glue for the next level of the model. You eventually remove the unglued metal powder and fire the model to fuse the metal particles. It's still porous and fragile, so the next step is to bronze the model to make it pretty much solid metal. Plating is an optional extra step...

3D Metal Printer (5 minute video)

Of course, the big high temperature oven is not likely to fit neatly into your average home workshop. (Can it also be used as a smoker for ribs? But be sure to set it to low and slow before you put your wet hickory chips on the lower rack).

This one seems similar, but instead of glue it uses a laser to fuse the metal:

EOS 3D printer - Metal Parts (3 minute industrial advertising video)

A variety of metals are supported depending on the printer.

3d printing with metal, titanium & aluminum demo by EOS (1 minute video)

A long video that brings some additional perspectives to the topic of 3D printing:

The Future of 3D Printing (50 minute video).

So, as summarized by the Firesign Theatre on "I Think We're All Bozos on This Bus":

The future is fun! ... The future is fair! ... You may already have won! ... You may already be there!


In case you were wondering, I don't own a 3D printer and have no business interest in any of the companies or products mentioned in this article.

Addendum - 4/11/2013

Here's an additional video about one man's plans to open source the files needed to print your own guns. (24 minutes).

Friday, March 8, 2013

Curriculum Design with the End in Mind

As a person who enjoys reading, I'm heartened to see that interesting material about education isn't limited to videos (e.g. TED talks). I was particularly impressed with this article from Grant Wiggins: "Everything you know about curriculum may be wrong. Really.". Thanks and a tip of the hat to Srdjan Verbić for sharing a link to that article on the Google+ STEM Community.

Wiggins observes that a curriculum is often conceived as an organized, ordered progression from basics to advanced knowledge, but points out that to become a great soccer player, you don't plan to sit in long lectures about the rules and strategies of soccer. He asserts, quite convincingly, that the end goal of education today isn't to instill knowledge (heck, you can always Google up more facts on a topic than you're ever going to be able to remember anyhow), but to teach students how to get things done in a field; to do things.

Once upon a time, the world may have been sufficiently static that it made sense for a school curriculum to provide you with lectures on the basics and work up to lectures on advanced topics. But in today's world, everything is changing so rapidly that mastering the art of sitting through such a sequence of lectures, and answering test questions at the end to demonstrate your knowledge of the lecture content, is arguably akin to trying to use a classroom as the sole instrument to develop a soccer team.

I think he makes an excellent point. I'd like to believe that the interactivity of MOOCs is a shift toward a "how to get things done" approach vs. classroom lectures. One problem I see, though, is that if there is no clear progression from basics to advanced knowledge, how do you help a student progress from simple things to complicated things? Software development, for example, is a field where clearly there's a knack to being able to write software, but there is also a lot of background knowledge (e.g. data structures, fundamental algorithms, development methodologies) that a teacher can't just assume the student will already know or can Google up as they need it. Seems to me that the shift isn't so much away from having to master increasingly complex stuff as it is a shift away from testing the ability to regurgitate explanations, and instead toward demonstrating application of the increasingly complex stuff. Udacity's CS101 final exam, for example, is open-book and you can revise your answers as much as you like, so if an answer initially doesn't pass one of their test cases, you can fix it. At least on the test I took in Summer 2012, the problems weren't simply cloned variations of problems we'd done in class. I think that's a good final exam in a programming course. I believe Prof. Evans and the Udacity staff should be proud of what they put together for that Introduction to Computer Science and Python Programming course.

The Good Eats television show is another example. It is a show that teaches cooking. The bulk of the run time in the show is filled with lectures and demonstrations about cooking, but you are encouraged to try things out yourself. The lectures aren't focused so much on recipes as on explanations as to why the recipe calls for what it calls for: what role does the ingredient or technique play in the recipe? I'm not much of a cook, but I find the show interesting, more so than other cooking shows where an attractive cook just demonstrates preparation of a dish. I'd like to believe that the lessons from Alton Brown on Good Eats have improved my skills in the kitchen. I did apply what he taught to prepare a turkey, and the result was good. It helps that the recipes are available on the web, so you don't need to worry about the details as you watch the show.

Not clear how a less project-oriented field, say "History", maps to this "learn to do it" approach to education. Maybe it's just a bad attitude on my part, but it seems to me that History is an example of a subject where indeed the end objective is to be able to sit through long lectures and then be able to regurgitate facts at the end. If the objective is to draw broad lessons from history and write compelling, well-supported essays in support of change, I'm not sure how big a field the study of history would turn out to be.

Particularly if you are in some field other than mine (software engineering), I'd sure like to hear how you think a bias toward "learn to do" vs. "learn the facts" would change your curriculum. Do feel free to add comments to this blog post. Or add a link here to your own blog post if you have more to say than fits comfortably into the relatively small slots provided for comments.

Friday, March 1, 2013

Marketing the Importance of Programming Education

At the risk of sounding like Dr. McCoy complaining to Captain Kirk in the original Star Trek TV series, I'm a software engineer, not a marketeer. At my request, the local high school and local BOCES school have announced the availability of a mentored edition of Udacity's CS101 at the "Yes We Can" Community Center here in the New Cassel section of North Hempstead, NY. My plan is to have weekly meetings of the class at 7PM Mondays, starting March 4, for a Scrum-inspired stand-up meeting where each student will say what they did on the course in the past week, what they will do this week, and what impediments they see in their way. On other school nights I'll be available 7-9PM to answer questions, and the students can put in whatever hours they want at the community center, or, if they have the Internet at home, they can work on the course there. Udacity provides little in the way of a "dashboard" for tracking the progress of the students in a class, so I'm hoping the Scrum-style stand-up meeting will enable me to track progress and figure out who needs extra attention. And I hope it serves to persuade students to live up to their announced plans and keep on keeping on. I've prepared a small set of slides to use at the first night's meeting.

But, as of Tuesday this week, the community center tells me no students have enrolled so far. I've nudged my contacts at the high school and BOCES school to see if we can get some students motivated to come check it out. My plan is to show up on Monday evening and bring a book to keep me busy if it turns out I'm there by myself.

Meanwhile,, a non-profit, has appeared on the Web to promote the proposition that every student should have a chance to learn to code. To get people's attention, they've posted to YouTube an "all-star" video, What most schools don't teach, in 3 editions: 1 minute, 5 minutes, and 10 minutes.

If you've been following my blog here this month, you may have seen the video from Jörn Loviscach, "MOOCs and other faster horses", that I linked from my recent post "Education and Technology". Loviscach is one of the professors. He reported some scary statistics about how MOOCs typically start out with hundreds of thousands of students, but a huge fraction (90%) typically drop the course. The classroom at the community center only seats about 12 students, and even if I had an overwhelming demand for CS101, I don't think we could accommodate more than a Monday team and a Tuesday team. I don't want to get into a situation where we've overcommitted the center's PCs, so that students who show up to work on the course can't get a seat. Bottom line: unless I can influence that retention rate as part of my local mentoring of the class, I may not live long enough to see anyone actually complete the course. Hence my concern about "retention".

I agree that every student should have a chance to learn to code. Now if I can only manage to recruit and retain some students. I'll blog about how things go, so stay tuned, please. To paraphrase Scotty from Star Trek: I may think of myself as a software engineer, but I'm a marketeer now. And if you know anyone in North Hempstead, NY, please encourage them to visit the Yes We Can Community Center on Garden Street and sign up for CS101.