From June 2013 through mid-February 2014, I worked almost exclusively on rewriting our town’s water billing system. It works in conjunction with our AMR system, which serves as a store-and-forward mechanism: it sends meter configuration changes to our hosted application and also collects daily reads for every water meter in our “district”.
We chose Perl as the implementation language, but even more important than the language were the choices we made to avoid special casing and to keep things flexible. Here is one example: a meter, which measures the amount of water your home or business uses, is completely different from a fire service line. A fire service line is required to provide a separate source of water to fire suppression sprinklers.
Despite these differences, our software treats these two entities similarly. Every meter has a row in the meter table, and every fire service line also has a row in the meter table. With this, the software can look up the charge for a fire service line just as if that line were a meter. This avoided special casing in the main software, and what I’ve found over the years is that when data can be treated similarly, there are fewer checks and fewer things that can go wrong in the software. One way of looking at this is that the special casing moves from conditional testing in the software to the data itself.
In the case of a meter or a fire service line, the data contains the decision-making. A column in the meter table holds a value that, if absent, means the row represents a meter; if a known value is present, the row represents a fire service line. There are fewer checks in the software.
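A minimal sketch of the idea, with an invented column name and flag value (the real schema surely differs): a row is a hash ref, and one column carries the decision, so both kinds of rows flow through the same code path.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch: the fire_service column and its 'F' flag are
# illustrative, not the real schema. Undef/absent means an ordinary
# meter; a known value means a fire service line.
sub is_fire_service {
    my ($row) = @_;
    return defined $row->{fire_service} && $row->{fire_service} eq 'F';
}

# Both rows live in the same "meter table" and use the same lookup;
# the data, not a special-cased branch, carries the decision.
my $meter        = { meter_id => 101, fire_service => undef };
my $fire_service = { meter_id => 102, fire_service => 'F'   };

print is_fire_service($meter)        ? "fire\n" : "meter\n";   # meter
print is_fire_service($fire_service) ? "fire\n" : "meter\n";   # fire
```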
This isn’t a unique discovery. I certainly didn’t invent it. It just seems hard to practice when you are under a tight project deadline, but it seems to pay off handsomely in the end.
*when used for a municipal programming project
Since June of 2013, I have been working on rewriting my town’s water billing system, parts of which — the meter configuration and water billing systems — were originally written in Informix 4GL. So the project has included both writing new software in Perl and rewriting existing software in Informix 4GL.
The greatest aspect of using Perl — and I can see parallels in Clojure lists, vectors, and maps — is that Perl hashes are very flexible. Flexibility is very important in a municipal project. Requirements tend to come in on the fly, and water consumption and billing rules are certainly not, as folks like to say, rocket science, but there are a lot of them: what happens when a water meter’s digits roll over to 0000, or how to calculate consumption from the old and new meters’ reads when you replace a meter. And most of these rules are documented in one place: the software. And you can only read those rules if you own the software.
Your programming language needs flexibility, like the ability to cache away a meter reading value that will be displayed on a bill, as opposed to a meter reading value that can be subtracted from another value in which a meter rollover may or may not have occurred. As we now get ready to approve and mail out the first water bills with 3-tier rates, billing for fire service lines (if you have one), as well as the usual administration charge, I credit the [almost] success of the project to the flexibility and modularity our programming language allows.
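The rollover rule mentioned above can be sketched in a few lines. This is an illustration under assumptions, not our production code: it assumes a register with a fixed number of digits (here 4, so it wraps past 9999 back to 0000), and the subroutine name is made up.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch of rollover-aware consumption. Assumes a
# fixed-digit register; a 4-digit meter wraps at 10_000.
sub consumption {
    my ($old_read, $new_read, $digits) = @_;
    my $modulus = 10 ** $digits;
    # Adding the modulus before taking the remainder handles the case
    # where the register rolled past 0000 between the two reads.
    return ($new_read - $old_read + $modulus) % $modulus;
}

print consumption(9950,   25, 4), "\n";   # rolled over: 75
print consumption(1200, 1350, 4), "\n";   # no rollover: 150
```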
This post is not intended to be a contest of one language versus another. We are using Perl for the water project for a number of pragmatic reasons: the folks working on the project do not know Clojure, and Perl has solid DBI support, to name two.
Instead, this post is about something I’ve noticed, having used Clojure on a few small projects, and now returning, like Kellogg’s Corn Flakes, to Perl again for the first time (since 2000, and then again in 2003).
I am rewriting three major Informix 4GL programs in Perl. They rely heavily on Perl’s DBI, and hence need to store a lot of intermediate data, so there are quite a few module- and subroutine-scope variables. For me, the striking difference between Perl (and perhaps languages like it, including 4GL, VB, and so on) and Clojure is that my Clojure programs don’t seem to need variables, other than global vars and data bound in let statements.
I believe my appreciation for Clojure’s immutable data is as full as it can be for someone who has worked with the language for a couple of years, so I appreciate that you cannot initialize a variable and then modify it. The design of my Clojure programs was always different: I could have bound a lot of let variables wherever I needed them, but I just never needed to.
The data seemed to come and go. My Clojure programs read in data, manipulate it, make network I/O calls using it, and then write some of that input data and new data out to disk.
When I first started learning Clojure, the luminaries said my views of designing would change, and I believe they have.
I don’t know why, but my Perl programs have always seemed to lend themselves to lengthiness over brevity. That is not the fault of Perl itself. Part of my problem is [re-]learning a new language again for the first time, and keeping all the code in one place for easier debugging; part of it is that I am only just seeing what is possible with array and hash references.
For example, variables can be created in a subroutine and returned as part of a hash ref or array ref. Variables can also be declared as needed, something I do not remember as part of Perl when I used it thirteen and ten years ago, respectively.
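Here is a small sketch of what I mean, with invented names and data: lexicals are created inside the subroutine and handed back together in a single hash ref, rather than sitting around as module-scope variables.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Illustrative example: field names and the summarize_reads name are
# made up. The lexicals live only inside the subroutine and come back
# bundled in one hash ref.
sub summarize_reads {
    my (@reads) = @_;
    my $count = scalar @reads;
    my $total = 0;
    $total += $_ for @reads;
    my @sorted = sort { $a <=> $b } @reads;
    return {
        count  => $count,
        total  => $total,
        sorted => \@sorted,   # an array ref tucked inside the hash ref
    };
}

my $summary = summarize_reads(130, 120, 125);
print "$summary->{count} reads totaling $summary->{total}\n";
# 3 reads totaling 375
```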
As I write several new programs for new rate structure and quarterly water billing, the code will get cleaner.
Programmers care about their editors. I used Lugaru’s Epsilon for a while, and think both it and Emacs are wonderful editors. A few years ago, I just started using vim and gVim. I could blame getting Linux certification, but I just started using vim, and the rest is history.
Most recently, I ran into an annoying problem: gVim and vim both started writing Unix-formatted files out as dos-formatted files when saving over ftp to a Linux system. I had just rebuilt my 32-bit Linux workstation on CentOS 6.4 to be a more compatible environment with our production and test environments. (It’s 32-bit because our Informix SE tools and DB are 32-bit.) That is when the trouble started. I put a post on Stack Overflow, where it was suggested that my .vimrc was wrong and that a setting be added to it. That turned out not to be the case, so I put in a bug report; it appeared the problem might be with the version of vim itself, so I retrieved the 7.4a Beta. After building it, the problem corrected itself.
After unpacking the beta, I configured using these parameters:
./configure --enable-gui=auto --disable-gtktest
and then ran make followed by make install (as root).
For the past six years, I have used Ubuntu for development and at home. Most of the Informix tools installed on it easily, and although I had to make a few adjustments from our production RPM-based environment, things ran well. Now, though, I have to get all the Informix tools that are installed on our production system working, in order to run the Perl Informix DBI, Bundle::DBD::Informix. It isn’t going to happen on Ubuntu.
To be fair, this isn’t Ubuntu’s fault, but Ubuntu’s massive UI change — massive at least to me — along with a kind of removal of the user from deeper configuration access, makes the transition back to RPM-based development systems easier to take. The IBM/Informix tools, especially the Perl Informix DBI, won’t install on Ubuntu, or at least neither a consultant nor I can get them installed. And with an emphasis on Perl development now, an RPM system where Bundle::DBD::Informix will install is paramount.
I started using Ubuntu in 2006. I liked the way it installed, as well as the user interface. You could easily install server components, like LAMP. You can still install server components on the latest Ubuntu, but that nice, simple dividing line between the server pieces and a crisp user interface seems to have blurred and gone away. The Unity interface, at least to me, is just plain strange.
While the world is moving towards tablets and smartphones, are these going to be tomorrow’s preferred development environments? Is having a good monitor, keyboard, mouse, and traditional pedestal desktop a thing of the past? While tablets and smartphones might make good present-day Tricorders, they don’t seem like good development tools, at least to me. I am referring, of course, to Ubuntu/Canonical’s Mark Shuttleworth allegedly wanting Ubuntu to run on smartphones. I wish he would keep focusing on the conventional Linux system, and not encourage the UI design to go screaming off into the weeds.
Just my thoughts. I’ll miss you, Ubuntu. The African musical startup combined with the energy behind this distro was really something I will not soon forget.
Thirteen years ago, I was working as a contract employee at a large financial house. We worked on the Windows platform, and my boss needed me to catalog the internal URLs visited by our company’s substantial web application. ActiveState Perl was around then, so I installed it and started learning Perl. Of the language’s great features, I found hashes and regular expressions easy to use. To store and then report on each visited URL, I designed a hash in which each key was a URL and its value was an integer counting the number of visits to that URL.
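That tally can be sketched in a few lines of Perl. The log format and URLs below are invented for the example; the real application logs looked different, but the hash-of-counts pattern is the same.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the URL tally: a regex pulls each URL out of a line, and
# a hash maps URL => visit count. The sample log lines are made up.
my %visits;
while (my $line = <DATA>) {
    if ($line =~ m{(https?://\S+)}) {
        $visits{$1}++;
    }
}

# Report, most-visited first.
for my $url (sort { $visits{$b} <=> $visits{$a} } keys %visits) {
    print "$visits{$url}  $url\n";
}

__DATA__
GET http://intranet/app/login
GET http://intranet/app/reports
GET http://intranet/app/login
```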
Over the past ten years, I have used Perl on and off, but never long enough to learn it well. As a result of rewriting our water billing programs, that has all changed.
I have the free PDF version of Higher Order Perl, but also needed an introductory book that could double as a dictionary. That book is Beginning Perl by Curtis “Ovid” Poe.
I have not gotten through the book yet, but I should have started reading it before asking questions on stackoverflow.com. The book covers a lot of things I found initially puzzling. I can also use it as a dictionary, as opposed to a tutorial; trying something out and then looking up its details is how I learn best.
The book is well written. It is paced well, points out pitfalls, and has plenty of examples. I like Poe’s writing style. He clearly knows a lot, but does not hit the reader over the head with that fact.