Programmer and (aspiring) software craftsman. Interested in FP, Clojure, OOP, software design and programming in general.

The Only Sure Thing in Computer Science

Everything is a tradeoff.[1]

Works Cited

[1] Peter Van Roy and Seif Haridi. Concepts, Techniques, and Models of Computer Programming. MIT Press, March 2004. ISBN 0-262-22069-5.


Citation Needed

I may revisit this later. Consider this a late draft. I’m calling this done.

“Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration.” — Stan Kelly-Bootle

Sometimes somebody says something to me, like a whisper of a hint of an echo of something half-forgotten from the distant past, and it lands on me like an invocation. The mania sets in, and it isn’t enough to believe; I have to know.

I’ve spent far more effort than is sensible this month crawling down a rabbit hole disguised, as they often are, as a straightforward question: why do programmers start counting at zero?

Now: stop right there. By now your peripheral vision should have convinced you that this is a long article, and I’m not here to waste your time. But if you’re gearing up to tell me about efficient pointer arithmetic or binary addition or something, you’re wrong. You don’t think you’re wrong and that’s part of a much larger problem, but you’re still wrong.

For some backstory, on the off chance anyone still reading by this paragraph isn’t an IT professional of some stripe: most computer languages – including C/C++, Perl, Python, some (but not all!) versions of Lisp, and many others – are “zero-origin” or “zero-indexed”. That is to say, in an array A with 8 elements in it, the first element is A[0], and the last is A[7]. This isn’t universally true, though, and other languages from the same (and earlier!) eras are sometimes one-indexed, going from A[1] to A[8].

While it’s a relatively rare practice in modern languages, one-origin arrays certainly aren’t dead; there’s a lot of blood pumping through Lua these days, not to mention MATLAB, Mathematica and a handful of others. If you’re feeling particularly adventurous, Haskell apparently lets you pick your poison at startup, and in what has to be the most lunatic thing I’ve seen on a piece of silicon since I found out the MIPS architecture had runtime-mutable endianness, Visual Basic (up to v6.0) featured the OPTION BASE flag, letting you flip that coin on a per-module basis. Zero- and one-origin arrays in different corners of the same program! It’s just software, why not?

All that is to say that starting at 1 is not an unreasonable position at all; to a typical human, thinking about the zeroth element of an array doesn’t make any more sense than trying to catch the zeroth bus that comes by, but we’ve clearly ended up here somehow. So what’s the story there?

The usual arguments involving pointer arithmetic and incrementing by sizeof(struct) and so forth describe features that are nice enough once you’ve got the hang of them, but they’re also post-facto justifications. This is obvious if you take the most cursory look at the history of programming languages; C inherited its array semantics from B, which inherited them in turn from BCPL, and though BCPL arrays are zero-origin, the language doesn’t even support pointer arithmetic, much less data structures. On top of that, other languages that antedate BCPL and C aren’t zero-indexed. Algol 60 uses one-indexed arrays, and arrays in Fortran are arbitrarily indexed – they’re just a range from X to Y, and X and Y don’t even need to be positive integers.

So by the early 1960's, there are three different approaches to the data structure we now call an array.

  • Zero-indexed, in which the array index carries no particular semantics beyond its implementation in machine code.
  • One-indexed, identical to the matrix notation people have been using for quite some time. It comes at the cost of a CPU instruction to manage the offset; usability isn’t free. (There’s a short sketch just after this list that makes that cost concrete.)
  • Arbitrary indices, in which the range is significant with regards to the problem you’re up against.
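
To see what that “CPU instruction to manage the offset” amounts to, here’s a minimal C sketch (illustrative values only, nothing historical about it) of the three conventions simulated on top of the same flat block of memory; the extra subtraction in the last two cases is the cost in question:

```c
#include <stdio.h>

int main(void) {
    int a[8] = {10, 20, 30, 40, 50, 60, 70, 80};

    /* Zero-origin: the subscript is already the offset into the block. */
    int i = 3;
    printf("zero-origin   a[%d]  = %d\n", i, a[i]);

    /* One-origin, simulated: each access pays one extra subtraction
       to turn the subscript into an offset. */
    int j = 3;                       /* "the third element" */
    printf("one-origin    a(%d)  = %d\n", j, a[j - 1]);

    /* Arbitrary origin (think Fortran's DIMENSION a(5:12)), simulated:
       subtract the declared lower bound. */
    int lo = 5, s = 7;
    printf("arbitrary     a(%d)  = %d\n", s, a[s - lo]);

    return 0;
}
```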

So if your answer started with "because in C…", you’ve been repeating a good story you heard one time, without ever asking yourself if it’s true. It’s not about i = a + n*sizeof(x), because pointers and structs didn’t exist. And that’s the most coherent argument I can find; there are dozens of other arguments for zero-indexing involving “natural numbers” or “elegance” or some other unresearched hippie voodoo nonsense that are either wrong or too dumb to rise to the level of wrong.

The fact of it is this: before pointers, structs, C and Unix existed, at a time when other languages with a lot of resources and (by the standard of the day) user populations behind them were one- or arbitrarily-indexed, somebody decided that the right thing was for arrays to start at zero.

So I found that person and asked him.

His name is Dr. Martin Richards; he’s the creator of BCPL, now almost 7 years into retirement; you’ve probably heard of one of his doctoral students, Eben Upton, creator of the Raspberry Pi. I emailed him to ask why he decided to start counting arrays from zero, way back then. He replied that…

As for BCPL and C subscripts starting at zero. BCPL was essentially designed as a typeless language close to machine code. Just as in machine code registers are typically all the same size and contain values that represent almost anything, such as integers, machine addresses, truth values, characters, etc. BCPL has typeless variables just like machine registers capable of representing anything. If a BCPL variable represents a pointer, it points to one or more consecutive words of memory. These words are the same size as BCPL variables. Just as machine code allows address arithmetic so does BCPL, so if p is a pointer p+1 is a pointer to the next word after the one p points to. Naturally p+0 has the same value as p. The monadic indirection operator ! takes a pointer as its argument and returns the contents of the word pointed to. If v is a pointer !(v+i) will access the word pointed to by v+i. As i varies from zero upwards we access consecutive locations starting at the one pointed to by v when i is zero. The dyadic version of ! is defined so that v!i = !(v+i). v!i behaves like a subscripted expression with v being a one-dimensional array and i being an integer subscript. It is entirely natural for the first element of the array to have subscript zero. C copied BCPL’s approach using * for monadic ! and [ ] for array subscription. Note that, in BCPL, v!5 = !(v+5) = !(5+v) = 5!v. The same happens in C, v[5] = 5[v]. I can see no sensible reason why the first element of a BCPL array should have subscript one. Note that 5!v is rather like a field selector accessing a field in a structure pointed to by v.
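
Richards’ dyadic ! is, of course, exactly C’s subscript operator. If you want to check the equivalences he lists without taking anyone’s word for it, a few lines of C will do; this is purely illustrative, not anything from his mail:

```c
#include <assert.h>
#include <stdio.h>

int main(void) {
    int v[8] = {0, 10, 20, 30, 40, 50, 60, 70};

    /* C defines v[i] as *(v + i), exactly as BCPL defines v!i as !(v+i). */
    for (int i = 0; i < 8; i++)
        assert(v[i] == *(v + i));

    /* Addition commutes, so *(v+5) == *(5+v); hence v[5] == 5[v],
       the C twin of Richards' v!5 = 5!v. */
    assert(v[5] == 5[v]);
    printf("v[5] = %d, 5[v] = %d\n", v[5], 5[v]);

    /* And the first element is simply the one at offset zero: v[0] == *v. */
    assert(v[0] == *v);
    return 0;
}
```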

This is interesting for a number of reasons, though I’ll leave their enumeration to your discretion. The one that I find most striking, though, is that this is the earliest example I can find of the understanding that a programming language is a user interface, and that there are difficult, subtle tradeoffs to make between resources and usability. Remember, all this was at a time when everything about the future of human-computer interaction was up in the air, from the shape of the keyboard and the glyphs on the switches and keycaps right down to how the ones and zeros were manifested in paper ribbon and bare metal; this note by the late Dennis Ritchie might give you a taste of the situation, where he mentions that five years later one of the primary reasons they went with C’s square-bracket array notation was that it was getting steadily easier to reliably find square brackets on the world’s keyboards.

“Now just a second, Hoye”, I can hear you muttering. “I’ve looked at the BCPL manual and read Dr. Richards’ explanation and you’re not fooling anyone. That looks a lot like the efficient-pointer-arithmetic argument you were frothing about, except with exclamation points.” And you’d be very close to right. That’s exactly what it is – the distinction is where those efficiencies take place, and why.

BCPL was first compiled on an IBM 7094 (here’s a picture of the console, though the entire computer took up a large room) running CTSS – the Compatible Time Sharing System – that antedates Unix much as BCPL antedates C. There’s no malloc() in that context, because there’s nobody to share the memory core with. You get the entire machine and the clock starts ticking, and when your wall-clock time block runs out, that’s it. But here’s the thing: in that context none of the offset-calculations we’re supposedly economizing are calculated at execution time. All that work is done ahead of time by the compiler.

You read that right. That sheet-metal, “wibble-wibble-wibble” noise your brain is making is exactly the right reaction.

Whatever justifications or advantages came along later – and it’s true, you do save a few processor cycles here and there and that’s nice – the reason we started using zero-indexed arrays was because it shaved a couple of processor cycles off of a program’s compilation time. Not execution time; compile time.

Does it get better? Oh, it gets better:

IBM had been very generous to MIT in the fifties and sixties, donating or discounting its biggest scientific computers. When a new top of the line 36-bit scientific machine came out, MIT expected to get one. In the early sixties, the deal was that MIT got one 8-hour shift, all the other New England colleges and universities got a shift, and the third shift was available to IBM for its own use. One use IBM made of its share was yacht handicapping: the President of IBM raced big yachts on Long Island Sound, and these boats were assigned handicap points by a complicated formula. There was a special job deck kept at the MIT Computation Center, and if a request came in to run it, operators were to stop whatever was running on the machine and do the yacht handicapping job immediately.

Jobs on the IBM 7090, one generation behind the 7094, were batch-processed, not timeshared; you queued up your job along with a wall-clock estimate of how long it would take, and if it didn’t finish it was pulled off the machine, the next job in the queue went in and you got to try again whenever your next block of allocated time happened to be. As in any economy, there is a social context as well as a technical context, and it isn’t just about managing cost, it’s also about managing risk. A programmer isn’t just racing the clock, they’re also racing the possibility that somebody will come along and bump their job and everyone else’s out of the queue.

I asked Tom Van Vleck, author of the above paragraph and also now retired, how that worked. He replied in part that on the 7090…

“User jobs were submitted on cards to the system operator, stacked up in a big tray, and a rudimentary system read, loaded, and ran jobs in sequence. Typical batch systems had accounting systems that read an ID card at the beginning of a user deck and punched a usage card at end of job. User jobs usually specified a time estimate on the ID card, and would be terminated if they ran over. Users who ran too many jobs or too long would use up their allocated time. A user could arrange for a long computation to checkpoint its state and storage to tape, and to subsequently restore the checkpoint and start up again.

The yacht handicapping job pertained to batch processing on the MIT 7090. It was rare — a few times a year.”

So: the technical reason we started counting arrays at zero is that in the mid-1960's, you could shave a few cycles off of a program’s compilation time on an IBM 7094. The social reason is that we had to save every cycle we could, because if the job didn’t finish fast it might not finish at all and you never know when you’re getting bumped off the hardware because the President of IBM just called and fuck your thesis, it’s yacht-racing time.

There are a few points I want to make here.

The first thing is that, as far as I can tell, nobody has ever actually looked this up.

Whatever programmers think about themselves and these towering logic-engines we’ve erected, we’re a lot more superstitious than we realize. We tell and retell this collection of unsourced, inaccurate stories about the nature of the world without ever doing the research ourselves, and there’s no other word for that but “mythology”. Worse, by obscuring the technical and social conditions that led humans to make these technical and social decisions, by talking about the nature of computing as we find it today as though it’s an inevitable consequence of an immutable set of physical laws, we’re effectively denying any responsibility for how we got here. And worse than that, by refusing to dig into our history and understand the social and technical motivations for those choices, by steadfastly refusing to investigate the difference between a motive and a justification, we’re disavowing any agency we might have over the shape of the future. We just keep mouthing platitudes and pretending the way things are is nobody’s fault, and the more history you learn and the more you look at the sad state of modern computing, the more pathetic and irresponsible that sounds.

Part of the problem is access to the historical record, of course. I was in favor of Open Access publication before, but writing this up has cemented it: if you’re on the outside edge of academia, $20/paper for any research that doesn’t have a business case and a deep-pocketed backer behind it is completely untenable, and any speculative or historic research that might require reading dozens of papers to shed some light on longstanding questions is basically impossible. There might have been a time when this was OK and everyone who had access to or cared about computers was already an IEEE/ACM member, but right now the IEEE – both as a knowledge repository and a social network – is a single point of a lot of silent failure. “$20 for a forty-year-old research paper” is functionally indistinguishable from “gone”, and I’m reduced to emailing retirees to ask them what they remember from a lifetime ago because I can’t afford to read the source material.

The second thing is how profoundly resistant to change or growth this field is, and apparently has always been. If you haven’t seen Bret Victor’s talk about The Future Of Programming as seen from 1975 you should, because it’s exactly on point. Over and over again as I’ve dredged through this stuff, I kept finding programming constructs, ideas and approaches we call part of “modern” programming if we attempt them at all, sitting abandoned in 45-year-old demo code for dead languages. And to be clear: that was always a choice. Over and over again tools meant to make it easier for humans to approach big problems are discarded in favor of tools that are easier to teach to computers, and that decision is described as an inevitability.

This isn’t just Worse Is Better, this is “Worse Is All You Get Forever”. How many off-by-one disasters could we have avoided if the “foreach” construct that existed in BCPL had made it into C? How much more insight would all of us have into our code if we’d put the time into making Michael Chastain’s nearly-omniscient debugging framework – PTRACE_SINGLESTEP_BACKWARDS! – work in 1995? When I found this article by John Backus wondering if we can get away from Von Neumann architecture completely, I wondered where that ambition to rethink our underpinnings went. But the fact of it is that it didn’t go anywhere. Changing how you think is hard and the payoff is uncertain, so by and large we decided not to. Nobody wanted to learn how to play, much less build, Engelbart’s Violin, and instead everyone gets a box of broken kazoos.

In truth maybe somebody tried – maybe even succeeded! – but it would cost me hundreds of dollars to even start looking for an informed guess, so that’s the end of that.

It’s hard for me to believe that the IEEE’s membership isn’t going off a demographic cliff these days as its members age, and it must be awful knowing they’ve got decades of delicious, piping-hot research cooked up that nobody is ordering while the world’s coders are lining up to slurp watery gruel out of a Stack-Overflow-shaped trough and pretend they’re well-fed. You might not be surprised to hear that I’ve got a proposal to address both those problems; I’ll let you work out what it might be.

2 public comments
mihai
4035 days ago
"Whatever programmers think about themselves and these towering logic-engines we’ve erected, we’re a lot more superstitious than we realize. We tell and retell this collection of unsourced, inaccurate stories about the nature of the world without ever doing the research ourselves, and there’s no other word for that but 'mythology'."
Cupertino, CA
brico
4037 days ago
Or why yacht timing caused us all to zero-index.
Brooklyn, NY

The Careless Ones

Two days ago my wife placed an on-line order at Walmart for a metal-frame bunk bed for our grandchildren to sleep on when they stay over at our house. Yesterday it arrived. Wow! One day delivery to my front door. Someone cared.

The metal-frame parts for the bunk bed came in a box that was, perhaps, 4ft X 18in X 6in, or 3 cu ft. It weighed about 100 lbs. I looked at this carton with a growing sense of despair, knowing that the next several hours of my life would be spent in the hell of unpacking and assembly. I've been in that hell before. I had no desire to return to it. But I love my grandchildren and I wanted them to have a nice place to sleep. So...

I set my jaw and opened that box.

Have you ever seen the movie "Pulp Fiction"? Do you remember the briefcase? The golden glow? Well, it wasn't quite like that; but it was close. The innards of that box were packed -- perfectly. I had expected chaos. I had expected peanuts! (No! NO! Not PEANUTS!.... Run Away!....) Instead, I found a perfect rectangular prism of parts and well formed packing material.

The packing material was not that horrible, nails-on-chalkboard, land-fill-fodder, phosgene generating styrofoam. No, this packing material was blocks of thick corrugated cardboard. Tough. Recyclable. Form Fitting. Cardboard. And these cardboard blocks were perfectly shaped to fill any empty space in that carton.

It was a masterpiece of packing. I stood in awe of it. From the moment I opened that box, it was clear to me that someone cared.

What's more, as I gradually took the pieces out of the box I found that each piece was nicely wrapped in clear plastic, with little cardboard end caps for those pieces with sharp or oddly shaped ends. The parts were laid in the box in an orderly fashion that was very easy to understand. Removing the parts was trivial, and did not result in any clanging or banging or bumping. It just all came apart with a trivial amount of effort.

To the packaging engineers at Dorel: Nicely Done! It was clear to me that you guys cared about that packaging. You thought about it. You must have tried many different combinations before you settled on the final strategy. I want you to know I appreciate that.

You know how most kits come with a bag of parts? This bag is usually filled with screws, washers, nuts, bushings, etc. It often looks like some dufus in the warehouse reached into several dozen bins, grabbed a handful (or a fingerful) from each without bothering to count, dumped them into the bag, taped the bag shut, and tossed it into the box. Well, get this: The parts for this bed were shrink-wrapped onto a cardboard backing that labeled each part. I could tell, at a glance, what all the parts were, and that all the parts were there.

I won't bore you with the story of the assembly of the bed. Suffice it to say that the bed went together with a minimum of issues. The instructions, if not perfect, were very good, and appropriately detailed. Overall, I think I made two assembly errors that I could blame on ambiguity in the instructions. Both of these errors were trivial to fix.

In short, I left that project feeling like I had been supported by a team of engineers who had thought about, and cared about, how to make the job of assembling that bed as easy as possible. Again, Dorel, Nice work!

And what about Walmart and Fedex? They fulfilled, shipped, and delivered that order in less than 24 hours. Nice going guys! Thanks for caring!


I upgraded to OSX 10.9 this morning. I sat down at my desk, ready to begin the day. My plan was to write this article. I've been thinking about it for several days. But up on my screen was the announcement of "OSX Mavericks" with a friendly button that said "Install Now".

I should have known. I should have known. But it was early, and I was sipping my coffee, and... Well... I hit that button.

The download began, and that was no big deal. The download was several gigabytes, so I expected it would take a few minutes. So, in the mean time I decided to create an account on healthcare.gov.

I had heard that healthcare.gov was having some trouble. But, since Mavericks was downloading, I figured I'd give it a try. After all, maybe I could find a better health insurance deal there.

The experience began well. The first few pages were nicely designed, and easy to read. They were informational and cheery. Then it asked me for my state, and I selected Illinois. The site came back right away with some more useful information about Illinois applications, and then presented me with a big friendly button that said: "Apply Now". I hit that button and then...

Well, the little spinner spun. And spun. And spun. So I checked on the OSX download. It was done and ready to install, but I didn't want to interrupt my application at healthcare.gov, so I decided to read some email and twitter and...

Healthcare.gov came back in a few minutes with a traditional account creation page. It wanted a user name and password. I could rant about the idiocy of websites that force you to put numbers and punctuation in your user names and password. I could. I could rant about that. Oh, yes, I could. But I won't. No. No I won't.

The Hell I Won't!

Dear website creators. Upper and lower case, numbers and punctuation, DO NOT INCREASE SECURITY. What number do people use? They use the number 1. What punctuation do they use? They use a period. Where do they put the number? At the end. Where do they put the period? At the end. What do they capitalize? Words! Do you really think there's a big difference between "unclebobmartin" and "UncleBobMartin1."? Actually, there is! The difference is in my frustration level with your stupid website.

Anyway... I filled out the form with my appropriately numbered, punctuated, and capitalized user name, and my appropriately numbered, punctuated, and capitalized password. And then clicked the "Create Account" button.

I was quickly rewarded with another page that asked me for security questions. You know the kind. What is your mother's maiden name? What is your eldest sibling's height? How many times have you been arrested in Nevada? That kind of thing.

I chose three questions that were easy for me to answer. One of them was: "A personally significant date". That was easy. I was married on July 21, 1973. So I entered 7/21/73.

A little red sentence appeared beneath my cursor. It said: "That entry is invalid!" My mistake was immediately evident. We don't use two digit years anymore, do we? Not since Y2K. Oh no. Now we use 4 digit dates. And we will, I suppose, until Y10K. So I entered: 7/21/1973. But, again, I got the little warning: "That entry is invalid!"

Hmmm. What could be wrong? Given that I am a programmer, I started thinking like a programmer. Perhaps the programmer was matching a regular expression like: \d{2}/\d{2}/\d{4}. So I tried: 07/21/1973. No dice. So I tried a number of other variations. No cigar. I spent 10 minutes or so trying to figure out what the demented programmer who wrote this code was expecting me to enter.
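
For what it's worth, the guess itself is easy to sanity-check locally. If the server had really been matching the pattern I suspected, 07/21/1973 should have gone through; a quick sketch with POSIX regular expressions (which have no \d, so [0-9] stands in for it; the pattern and test strings are just my guesses) bears that out:

```c
#include <regex.h>
#include <stdio.h>

int main(void) {
    /* My guess at the hypothetical server-side check: \d{2}/\d{2}/\d{4}. */
    const char *pattern = "^[0-9]{2}/[0-9]{2}/[0-9]{4}$";
    const char *tries[] = { "7/21/73", "7/21/1973", "07/21/1973" };

    regex_t re;
    if (regcomp(&re, pattern, REG_EXTENDED | REG_NOSUB) != 0)
        return 1;

    for (int i = 0; i < 3; i++) {
        int ok = (regexec(&re, tries[i], 0, NULL, 0) == 0);
        printf("%-11s -> %s\n", tries[i], ok ? "matches" : "no match");
    }
    regfree(&re);
    return 0;
}
```

Since the one string that does fit the pattern was rejected too, the regex theory was dead.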

And then it occurred to me. The programmer did not write the questions! Some bureaucrat wrote the questions. The programmer never talked to that bureaucrat. The programmer never read the questions that the bureaucrat wrote. The programmer was simply told to display a set of questions from a database table, and to store the responses in the user's account. The programmer had no idea that this particular question was asking for a date! So the programmer was not trying to match a date! The programmer was just accepting any string.

Well, not any string. After all, I had been typing strings for the last ten minutes. No, someone had told the programmer (or the programmer simply decided on his own) that certain characters would not be appropriate in the answers to the questions. One of those inappropriate characters was probably: "/". I think numbers must also have been considered to be inappropriate since I had tried: 17 July, 1973. Or perhaps it was the comma. Who knows? Who cares? (Apparently not the bureaucrat, the programmers, or the people who tested this system.)

So I typed: My Wedding Day. And all was well!

I was quickly sent to a new page that told me my account had been created and that an email was being sent to me to confirm that I was who I said I was. I expected this. It's gotten to be a pretty normal step nowadays.

So I went to my inbox, and there was the letter. And the letter had the expected confirmation link. So I clicked on that link, and my browser reacted immediately with a new tab on the healthcare.gov site.

"Wow!" I thought. That was pretty fast. Then I read the notice on the page in that new tab. It said: OOPS, you didn't check your email in time. Uhhh. Huh? I clicked on the link within 10 seconds of the email's arrival. Was I supposed to be faster than that? Should I have been poised with my finger on the mouse button just waiting for that email to show up so I could click it faster than lightning?

But then I noticed that the page also told me that If I had already confirmed my account, I could just log in using another link. So I tried that link, but got nowhere with that either. In the end I concluded that my account had not been created, and that the entire process would have to be repeated. (Sigh). Someone didn't care about my account. Perhaps lots of people didn't care about my account. I wonder why?


But then I looked over at the OSX installer and I thought, "Well, let's get on with this." So I clicked the install button. Up popped a warning box telling me that I needed to close all the other applications that were running. It gave me a button that would do that for me, and I dutifully clicked it.

One by one my applications melted away. Windows closed. Warning dialogs popped up requesting permission to close their respective applications, which I dutifully accepted. Click, click, click, down, down, down. Until there were just two left. The OSX installer window, and the Software Updater window that had told me about the new OSX version.

And that's where the process stalled. When I clicked on the installer's Continue button, it told me to close all the other applications. When I tried to close the only remaining application, the Updater, it told me it could not close until the current installation was complete.

This is a classic deadlock! Both processes were waiting for the other to complete. Neither could continue until the other finished. WHO TESTED THIS? WHO CARED ABOUT IT?
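
If you've never had the pleasure, the shape of the thing is the textbook two-lock deadlock: each party grabs its own resource and then waits forever for the other's. A minimal sketch, in C with pthreads and entirely hypothetical names (build with cc -pthread; it will deliberately hang, which is the whole point):

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Two locks standing in for "the installer" and "the updater". */
static pthread_mutex_t installer = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t updater   = PTHREAD_MUTEX_INITIALIZER;

static void *installer_task(void *arg) {
    (void)arg;
    pthread_mutex_lock(&installer);   /* holds the install */
    sleep(1);
    printf("installer: waiting for the updater to quit...\n");
    pthread_mutex_lock(&updater);     /* never succeeds */
    return NULL;
}

static void *updater_task(void *arg) {
    (void)arg;
    pthread_mutex_lock(&updater);     /* "can't quit until the install completes" */
    sleep(1);
    printf("updater: waiting for the installation to finish...\n");
    pthread_mutex_lock(&installer);   /* never succeeds */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, installer_task, NULL);
    pthread_create(&b, NULL, updater_task, NULL);
    pthread_join(a, NULL);            /* hangs here forever: deadlock */
    pthread_join(b, NULL);
    return 0;
}
```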

So, using my great powers as a software super hero, I managed to convince the Software Updater that it should close. And then the OSX installer took hold, restarted my computer, and entered that quasi-state, neither rebooted nor halted, in which it does its installs. This is the state you don't want to interrupt. Indeed, you can't interrupt it without powering down -- and powering down while it's in the midst of rewriting your operating system is seldom advisable.

And that's when it informed me that this process was going to take 34 minutes.

Note to Apple: Please have the courtesy to tell me, in advance, if something is going to take a long time. Don't let me start an irreversible operation without letting me know that my computer will be out of commission for the better part of an hour.

So I went upstairs and took a shower. When I returned the install process was nearly complete; and I happily watched my computer reboot.

Up came the windows, one by one. Email. Calendar. Chrome. Finder. UP, up, up. What a nice sight. And so I began to do my daily work.

And while I was typing away, up comes the Calendar app -- right on top of my current work. Calendar throws itself in front of me, grabs the keyboard focus while I am typing and then pops up a warning dialog: A new event has been added. Please choose the calendar for this event.

What event? You aren't showing me the event. I can't select a calendar unless I know what event you are talking about. What event is it? All I see is an entire month on my screen, a month full of events. Are any highlighted? No. Does the dialog name the event? No. What event is it? I can't tell. Ugh.

So I choose a random calendar to get the annoying dialog off my screen. I go back to the article I am writing. I read the last paragraph I wrote in order to reengage the flow of my writing and...

Up comes the Calendar app, right on top of my article. It's got another event to add. And, same as before, it doesn't tell me what event it is. So, once again I click on a random calendar to make the ridiculous app go away. I focus again on my article, reading -- again -- that last paragraph. Up comes the Calendar app, right on top of what I am reading. It's got another event to add.

WTF? OK, I guess this upgrade of the OS is going to walk through all old events and add them all over again. Indeed, as I am reasoning this out, several dozen more warning boxes pop up telling me of new events, and demanding I assign a calendar.

So now, for the next twenty minutes, I am a slave to the calendar app, as it requires calendars that I have no context to supply. I simply hit the buttons by rote, assigning the unspecified events to whatever calendar is on top, hoping (against hope) that this does not destroy my carefully constructed calendar.

Of course the Calendar finally settled down -- though I expect it will rudely inject itself any time a new event is added by my assistant, or my wife, or... And, as far as I can tell, no damage was done to the events that were in my calendar, so the reasons for that furious storm of warning dialogs remain a complete mystery.


So just what is the moral of this Halloween Horror Story? The moral is that some people care, and some don't. Walmart cared about my bunk bed order. Fedex cared about delivering that bunk bed promptly. Dorel cared about packaging that bunk bed, and about guiding me to assemble it, and about its overall structure and integrity.

Because these people cared, my grandchildren will have a place to sleep at my home.

Healthcare.gov did not care about my account, did not care about the answers to the security questions, did not care about response times. And, in the end, did not care about providing me with healthcare insurance.

Apple did not care about the deadly embrace between the Updater and the Installer, did not care to inform me about the installation time, did not care about the rudeness of its Calendar application.

What did healthcare.gov and Apple care about? Schedule. Not me. Not my needs. They cared about their schedule.

I didn't need a new OSX update today. It could have waited for a week, or two, or ten.

I do need health insurance. But healthcare.gov cared more about their schedule than about being the place where I buy my health insurance.

I find it sad that the careless people in this story are so obviously software people. Perhaps that's not fair since the Walmart website worked perfectly. Still, the carelessness was all on the software side.

Is that who we are? Is that how we want our industry to be viewed? Are we: The Careless Ones?

Now perhaps you think I'm being too critical. After all, healthcare.gov is new, and OSX 10.9 is new. Shouldn't we cut them some slack because of their newness? I mean, they'll get the problems ironed out eventually. So, shouldn't I just lighten up?

They don't have to iron out their problems on my back, thank you very much. If they cared, they could have prevented the trouble they caused me. They didn't care. They released software that they knew did not work properly and had not been tested enough. Nobody at healthcare.gov tested that damned date question, or if they did, they didn't care. The people at Apple had to know about the incredible rudeness of their calendar app; but they didn't care. They cared about their schedule not about me.

And that leads me to my question. What kind of organization do you want to be associated with? One that cares? Or one that is careless? And if you aren't in the right kind of organization, what are you going to do about it?


Dance you Imps!

There are several tools out there that promise to bridge the divide between objects and relational tables. Many of these tools are of high quality, and are very useful. They are collectively known as ORMs, which stands for Object Relational Mappers. There's just one problem. There ain't no such beast!

The Impedance Mismatch

It all started long ago, in the hinter-times of the 1980s. Relational databases were the big kids on the block. They had grown from their humble beginnings, into adventurers and conquerors. They had not yet learned to be: THE POWER THAT DRIVES THE INTERNET (TM); but they were well on their way. All new applications, no matter how humble, had to have a relational database inside them. The marketing people didn't know why this was true; but they knew that it was true.

And then OO burst forth upon the land. It came as Smalltalk. It came as Objective-C, and it came, most importantly, as C++, and then Java and C#. OO was new. OO was shiny. OO was a mystery.

Most creatures at the time feared new things. So they hid in caves and chanted spells of protection and exorcism. But there were imps in the land, who thrived by embracing change. The imps grabbed for the shiny new OO bauble and made it their own. They could feel its power. They could sense that it was good -- very, very good; but they couldn't explain why. So they invented reasons. Reasons like: It's a closer way to model the real world! (What manager could say no to that?)

Then one day, when the OO imps were just minding their own business, doing their little dance of passing messages back and forth to each other, the RDBMS gang (pronounced Rude Bums) walked up to them and said: "If you imps want to do your impy little dance of passing your stupid messages back and forth in our neighborhood, then your bits are ours. You're going to store them with us. Capisce?"

What could the OO imps say? The Rude Bums ruled the roost; so, of course, they agreed. But that left them with a problem. The Rude Bums had some very strict, and very weird, rules about how bits were supposed to be stored.

These rules didn't sit well with the imps. When the imps did their impy message passing dance, they just sort of threw the bits back and forth to each other. Sometimes they'd hold the bits in their pockets for awhile, and sometimes they'd just toss them over to another impy dancer. In a full fledged impy dance, the bits just zoomed around from dancer to dancer in a frenzy of messages.

But the Rude Bums demanded that the bits be stacked up on strictly controlled tables, all kept in one big room. They imposed very strict rules about how those bits could be arranged, and where they had to be placed. In fact, there were forms to fill out, and statements to make, and transactions to execute. And it all had to be done using this new street banter named SQL (pronounced squeal).

So all the OO imps had to learn squeal, and had to stack their bits on the Rude Bums tables, and had to fill out the forms and do the transactions, and that just didn't match their dancing style. It's hard to throw your bits around in the impy dance when you've got to stack your bits on tables while speaking squeal!

This was the beginning of the Impy Dance Mismatch between OO and the Rude Bums.

ORMs to the rescue, Not!

The next decade saw a huge increase in the political power of the Rude Bums. They grabbed more and more territory, and ruled it with an iron fist. The imps also gained territory; possibly more than the Rude Bums; but they never managed to break free of the Rude Bums' rules. And so they forgot how to dance. The free and lively OO dance they had done at the very beginning faded from their memory. It was replaced by a lock-step march around the Rude Bums' tables.

Then, one day, some strangers appeared. They called themselves ORM (the OutRaged Mongrels). The Mongrels had seen the free OO dancing of the imps before the Rude Bums took control of them; and the Mongrels longed to see the imps dance free again.

So they came up with a plan. They would do the squealing! They would arrange the bits on the tables. They would fill out the forms and execute the transactions. They, the Mongrels, would stand between the imps and the Rude Bums and free the imps to dance once again.

"Oh dance free you imps, dance free!"

But the imps didn't dance free. They kept right on doing their lock-step march. Oh they were happy to have someone else take care of the nasty job of speaking squeal and arranging bits on the tables. They were happy that they didn't have to deal directly with the Rude Bums. But now, instead of marching around the Rude Bums' tables, they marched around the Mongrels' cabinets (which looked an awful lot like the Rude Bums' tables).

The Impy Dance Mismatch between OO and the Rude Bums had simply changed to the Impy Dance Mismatch between OO and ORMs.

There ain't no such mapping.

An object is not a data structure. Repeat after me: An Object Is Not A Data Structure. OK, you keep repeating that while I keep talking.

An object is not a data structure. In fact, if you are the consumer of an object, you aren't allowed to see any data that might be inside it. And, in fact, the object might have no data inside it at all.

What do you see when you look at an object from the outside? You see methods! You don't see any data; because the data (if any) is kept private. All you are allowed to see, from the outside looking in, are methods. So, from the outside looking in, an object is an abstraction of behavior, not an abstraction of data.

How do you store an abstraction of behavior in a database? Answer: You don't! You can't store behavior in a database. And that means you can't store objects in a database. And that means there's no Object to Relational mapping!

OK, now wait!

Yeah? Wait for what? You want to argue that point? Really? You say you've got an account object stored in the database, and you use Hibernate to bring it into memory and turn it into an object with methods?

Balderdash! Hibernate doesn't turn anything into an object. All Hibernate does is to migrate data structures that are stored on a disk (a what? You still using disks? No wonder you're confused.) to data structures stored in RAM. That's all. Hibernate changes the form and location of data structures. Data structures, not objects!

What is a data structure? It's a bunch of public data with no methods. Compare that to an object which is a bunch of public methods with no visible data. These two things are the exact opposites of each other!
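
If it helps to see that opposition in code rather than prose, here is a minimal sketch in plain C with made-up names (not anything from Hibernate or from any real system). The first declaration is a data structure, all visible data and no behavior; the second is an object in the sense used here, all behavior, with whatever data it has hidden behind an opaque handle.

```c
#include <stdio.h>
#include <stdlib.h>

/* A data structure: public data, no methods. Anyone can read or poke the fields. */
struct point_data {
    double x, y;
};

/* An object: callers see only behavior. The handle is opaque,
   and the data behind it (if any) is invisible from out here. */
typedef struct circle circle;
circle *circle_make(double radius);
double  circle_area(const circle *c);

/* --- the hidden half, which would normally live in its own .c file --- */
struct circle { double radius; };

circle *circle_make(double radius) {
    circle *c = malloc(sizeof *c);
    if (c) c->radius = radius;
    return c;
}

double circle_area(const circle *c) {
    return 3.141592653589793 * c->radius * c->radius;
}

int main(void) {
    struct point_data p = { 1.0, 2.0 };   /* bare data, no behavior */
    circle *c = circle_make(2.0);         /* behavior, no visible data */
    printf("p = (%.1f, %.1f), area = %.2f\n", p.x, p.y, circle_area(c));
    free(c);
    return 0;
}
```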

I could get into a whole theoretical lecture on the nature of data structures and objects, and polymorphism, and switch statements, and... But I won't. Because the point of this article is simply to demonstrate that ORMs aren't ORMs.

What are ORMs?

ORMs are data structure migrators. That's all. They move data from one place to another while making non-semantic changes to the form of that data. They do NOT create objects out of relational tables.

Why is this important?

It's important because the imps aren't dancing free! Too many applications are designed such that the relational schema is bound, in lock-step, to the business objects. The methods on the business objects are partitioned according to the relational schema.

Think about that for a minute. Think about that, and then weep. Why should the message pathways of an application be bound to the lines on an E-R diagram? What does the behavior of the application have to do with the structure of the data?

Try this thought experiment. Assume that there is no database. None at all. There are no tables. No schema. No rows. No SQL. Nothing.

Now think about your application. Think about the way it behaves. Group similar behaviors together by responsibility. Draw lines between behaviors that depend on each other. Do you know what you'll wind up with? You'll wind up with an object model. And do you know what else? It won't look much like a relational schema.

Tables are not business objects! Tables aren't objects at all. Tables are just data structures that the true business objects use as a resource.

Moral

So, designers, feel free to use ORMs to bring data structures from the disk (the disk? You still using one?) into memory. But please don't think of those data structures as your business objects. What's more, please design your business objects without consideration for the relational schema. Design your applications to behave first. Then figure out a way to bind those behaviors to the data brought into memory by your ORM.
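
One way to picture that last step, as a sketch with entirely made-up names (the "ORM" here is reduced to whatever hands you a row-shaped struct): the row stays a dumb data structure, the business object is designed around its behavior, and a small binding function is the only place the two meet.

```c
#include <stdio.h>
#include <stdlib.h>

/* What the data-structure migrator delivers: a row-shaped bundle of fields. */
struct account_row {
    long id;
    long balance_cents;
};

/* The business object: shaped by behavior, not by the schema. */
typedef struct account account;
struct account { long balance_cents; };       /* normally hidden away */

/* The binding step: build the behavior object from the migrated data. */
account *account_from_row(const struct account_row *row) {
    account *a = malloc(sizeof *a);
    if (a) a->balance_cents = row->balance_cents;
    return a;
}

/* Behavior, with a business rule; nothing here knows about tables. */
int account_withdraw(account *a, long amount_cents) {
    if (amount_cents > a->balance_cents) return 0;
    a->balance_cents -= amount_cents;
    return 1;
}

int main(void) {
    struct account_row row = { 7, 10000 };    /* pretend the loader produced this */
    account *a = account_from_row(&row);
    printf("withdraw ok? %d\n", account_withdraw(a, 2500));
    free(a);
    return 0;
}
```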


"Imagine a system as immediate and tactile as a sketch pad, in which you can effortlessly mingle..."

Imagine a system as immediate and tactile as a sketch pad, in which you can effortlessly mingle writing, drawing, painting, and all of the structured leverage of computer science. Moreover, imagine that every aspect of that system is described in itself and equally amenable to examination and composition. Perhaps this system also extends out over the Internet, including and leveraging off the work of others.

…we want to upgrade our tools for documentation, so that the entire system can be read from within itself as an “active essay”…



- http://web.archive.org/web/20050406063507/http://squeak.org/about/headed-prev-vers.html

Clojure and testing

Occasionally I hear someone say that the Clojure community is against testing. I understand the sources of this – Rich Hickey (creator of Clojure) has spoken about testing in a number of talks and I think some of those comments have been misconstrued based on pithy tweet versions of more interesting longer thoughts. I’d like […]