Formula

Cidu Bill on Mar 12th 2013

[xkcd strip image: circumference_formula.png]
And the metatext doesn’t seem to be much help:
Assume r’ refers to the radius of Earth Prime, and r” means radius in inches

Filed in Bill Bickel, CIDU, comic strips, comics, humor, xkcd | 48 responses so far

48 Responses to “Formula”

  1. James Schend Mar 12th 2013 at 12:45 am 1

    The “joke” is that superscript is used for footnotes (like “the circle’s radius”) but also used for exponents. Get it? It’s like a word that has two different meanings! Hilarious.

    At least I think that’s it. If not, I’m totally stumped.

  2. Don Mar 12th 2013 at 12:46 am 2

    I think that’s it. The superscript “2” is used as a footnote reference instead of as an exponent.

  3. MollyJ Mar 12th 2013 at 12:47 am 3

    That’s what I got out of it. Ha. Funny.

  4. Jen Mar 12th 2013 at 12:50 am 4

    I did FINALLY get the joke in the comic. The formula is 2 times pi times the radius squared. It’s written with the two that should be squaring the radius instead acting as a footnote informing the reader that r is the radius, therefore making the formula not work. I assume it’s a commentary on collaborative works, in which items can get transliterated incorrectly. The alt text is then a continuation on the theme, of mixing and mismatching notations.

    Having worked as a secretary in a state budget office, assisting in the assembling of technical documents intended for release to the general public, I can state with absolute certainty that this kind of thing can happen pretty easily.

  5. Don Mar 12th 2013 at 12:53 am 5

    No, Jen, the formula for circumference is Pi times Diameter, or 2 times Pi times Radius. Part of the joke is deliberately mixing it up with the formula for area (pi r squared).

  6. James Pollock Mar 12th 2013 at 12:58 am 6

    The circumference of a circle is pi times the diameter. The diameter is twice the radius, so the circumference is also twice the radius times pi, or 2 pi r. The AREA of a circle is pi times the radius squared.
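
    For reference, the two standard formulas being contrasted, written out in LaTeX (ordinary geometry, nothing beyond what is stated above):

        C = 2\pi r    % circumference
        A = \pi r^2   % area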

    The joke is substituting one where you are expecting the other, and that is combined with the footnote/exponent thing noted above.

    “pi r squared” is etched into schoolchildren’s memory because of the feeble “Pie aren’t square! Pie are round!” pun, such that pi r squared is an automatic read. Having it turn out to be a footnote, thus, jars us out of what we expected. Then you stop and pay attention to the actual math, and you get the “I see what you did there.” moment.

  7. PeterW Mar 12th 2013 at 01:11 am 7

    I had to read halfway down the first page of the XKCD forum discussion to figure out that the 2 wasn’t an exponent but a footnote.

  8. The Bad Seed Mar 12th 2013 at 01:20 am 8

    I’m not a math geek (I’m a geologist), but this did make me smile. I guess it reminded me of engineers that I work with who obfuscate the facts more and more, the more they try to explain something.

  9. minorannoyance Mar 12th 2013 at 01:55 am 9

    Pie r round
    Cornbread r square

  10. Kit Mar 12th 2013 at 02:17 am 10

    FWIW, the metatext is a variant of the same joke.

  11. Cidu Bill Mar 12th 2013 at 02:23 am 11

    I hate the whole concept of comic strip metatexts. I don’t think I’ve mentioned that in a couple of weeks, so it’s oversue.

  12. Proginoskes Mar 12th 2013 at 02:54 am 12

    Overdue, Bill? “Oversue” sounds like what a lawyer would like to do.

    As for r’, that apostrophe is read as “prime” (so r’ is “r prime”). Double quotes are used for inches (6” = 6 inches, so r” = r inches).

    XKCD is “a webcomic of romance, sarcasm, math, and language.” Geek jokes, and probably not anything deeper. (So Jen (#4)’s point probably wasn’t intended.)

  13. The Vicar Mar 12th 2013 at 03:35 am 13

    @Proginoskes:

    Strictly speaking, the apostrophe is not read as “prime”, because it is an apostrophe. There is a completely distinct typographical mark for the prime symbol (and the double prime, and the triple prime, and the quadruple prime). The apostrophe is Unicode code point U+0027, the typographer’s (”curly”) apostrophes are U+2018 and U+2019, and the prime symbols are at U+2032 (single), U+2033 (double), U+2034 (triple), and U+2057 (quadruple). Any version of Mac OS or Windows since about 2000 will let you type them all correctly with relative ease. (Linux not so much, because Linux — despite its partisans yapping constantly about how everything is a stream of bytes — is still ultimately unsteady when it comes to anything other than pure ASCII. But if you use only very specific Linux programs which mimic what’s available on the Mac or on Windows, such as OpenOffice, you can probably handle the symbols correctly.) And don’t tell me “not everyone uses Unicode”, because just about everyone does at this point, even if some of them botch the finer points. (PHP, for example, screws up character encodings left and right because the language was designed, written, and maintained by a bunch of morons who demonstrate every weakness and stupidity of open-source software. And guess what popular blogging platform is written in PHP? Here’s a hint: CIDU runs on it.)
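
    A minimal sketch of those code points, just printing them side by side (Python; nothing here beyond the characters already named):

        for name, cp in [("apostrophe", 0x0027),
                         ("right single quotation mark", 0x2019),
                         ("prime", 0x2032),
                         ("double prime", 0x2033)]:
            print(f"U+{cp:04X}  {chr(cp)}  {name}")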

    In the same way, there are “curly” quotation marks and different size dashes for splitting up words (the hyphen), specifying a range of numbers (the en dash), or splitting phrases (the em dash), and they are available almost all the time.

    The reason people don’t use them is primarily a combination of western keyboards being pathetic and an unwillingness of people who learned to type pre-Unicode to learn anything new. (Similar to the way that people who learned to type on typewriters often put two spaces after every sentence, even though in most cases that isn’t the right thing to do on a computer, and ranges from ugly to pointless in effect.)

    Mac users actually have less excuse not to know this stuff, because the typographer’s quotation marks and apostrophes and the various sizes of dash have all been type-able from the standard U.S. Mac keyboard since about 1986 or so. (Option-[, Option-Shift-[, Option-], Option-Shift-], Option-hyphen, and Option-Shift-hyphen. Macs had the best single-byte character encoding, back before everyone went multibyte with Unicode.)

  14. Dave in Boston Mar 12th 2013 at 05:14 am 14

    No, U+0027 is not an apostrophe. U+0027 is a deprecated point for compatibility with ASCII. There’s some other value you’re supposed to use to get a ‘proper’ apostrophe. In the fantasyland that the Unicode people live in, you aren’t supposed to use it, and I’ve been told in so many words that when my source code stopped displaying correctly (because it contained that character) that it was a bug fix and that I should be grateful that now I had proper Unicode support. Of course, if you try to use the ‘proper’ apostrophe in source code, the compiler quite properly laughs at you.

    The reason Linux and Unicode do not get along is that Unix has always been based on a model where everything is a stream of bytes, and in Unicode fantasyland this is not allowed. The net result is that stuff doesn’t work.

  15. Stan Mar 12th 2013 at 05:34 am 15

    the Vicar @13 - Wow. Are you really bored at work, or should we be worried about you?

  16. Carl Mar 12th 2013 at 06:37 am 16

    I can testify that, to a former physics teacher, that was hilarious.

  17. UXO Mar 12th 2013 at 06:50 am 17

    Stan@14: We all have triggers that cause us to rant - The Vicar’s just (apparently) happens to be Unicode. I’m sure mine are equally obscure and pointless to anybody else. (I swear to CHRIST the next twit that changes lanes in front of me without signalling is gonna get his taillights taken out with a tire iron! I don’t care how fast you drive, just communicate your intent to other drivers! Is that too much to ask? Where the hell do these people get driver’s licences, anyway???)

    Carl@15: As an engineer, I agree.

  18. Kilby Mar 12th 2013 at 07:06 am 18

    I know how The Vicar feels (@13), because I can get pretty screedy myself when it comes to misused punctuation (see below). However, in this case I think there is no reason to be so picky about the difference between “prime” and “apostrophe”, because the character was not typeset, it was contained in plain text, and there is no other character for “prime” in the standard ASCII used in the Alt-text.

    P.S. Apostrophe errors are endemic in Germany, for two reasons: (1) there is a “single quote” character that is more obvious on the German keyboard, so very few amateur typists ever bother to find the “real” apostrophe key; and (2) the abominable “curly quote” feature in MS-Word that “corrects” a standard apostrophe to a (German) closing (single) quote mark, so that it ends up in the proper place, but reversed in orientation. Book publishers are not affected, they have lectors who reset the apostrophes with the correct character, but one sees the “backward” apostrophes in all sorts of locations (signs, letterheads, etc.), whenever someone has used “DIY” typesetting services.

  19. Kilby Mar 12th 2013 at 07:15 am 19

    P.P.S. @ 15 & 16 - I had a physics professor who introduced us all to a new convention right at the beginning of the first semester: instead of denoting our vectors with a little half arrow above the letter (like we all did in high school), he said that the convention in his class would be to use a tilde below the vector’s designation. A simple rule change, but the punchline was in the reasoning: in published form, the convention is that vector names are set in boldface type, and there is a very old typesetting tradition that uses a wavy line below manuscript text that is to be set in bold. Therefore, to save a little work for future manuscript preparation, he wanted us to get used to putting a wavy line (or tildes) below our vector names.

  20. zookeeper Mar 12th 2013 at 07:24 am 20

    Dayamn - ya’ll left me in the dust here. But thanks for explaining - I needed help with that one.

  21. Bob in Nashville Mar 12th 2013 at 07:27 am 21

    In this case, the meta text is closer to being funny than the comic. The radius of the Earth stated in inches would be a really huge number.

  22. Kilby Mar 12th 2013 at 07:45 am 22

    P.P.P.S. I hate it when I get all screedy about punctuation (@18) and then screw up a closing token that leaves half a sentence italicized.

  23. fj Mar 12th 2013 at 08:23 am 23

    @4, 12

    Actually, I think that a variation of Jen’s point was intended. It is an observation about how overloading unary postfix operators obfuscates a language.

    This is a strip, after all, that is willing to use the phrase “context-free grammar” as the basis for a pun. (See http://xkcd.com/1090/ )
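
    A toy sketch of that point, in hypothetical Python rather than anything from the strip: when the familiar power notation is hijacked as a footnote marker, an expression that reads as 2πr² actually evaluates to plain 2πr.

        import math

        class Quantity:
            """A number whose '**' has been repurposed as a footnote marker."""
            def __init__(self, value, note=None):
                self.value, self.note = value, note
            def __pow__(self, footnote_number):   # r**2 now just attaches footnote 2
                return Quantity(self.value, note=footnote_number)
            def __rmul__(self, k):                # allow 2 * pi * r
                return Quantity(k * self.value, self.note)

        r = Quantity(5.0)
        result = 2 * math.pi * r**2   # reads like "2 pi r squared", but the 2 is a footnote
        print(result.value)           # 31.4159... i.e. the circumference 2*pi*r, not the area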

  24. mitch4 Mar 12th 2013 at 08:51 am 24

    Bill re #11 I recently noted the revived “Puck” moved to a new host, and among the features he had implemented and was proud of, a disturbing one was hover-text notes attached to the comic main image! Apparently even went back and added to old ones.

    Also, every Puck comic in existence now has added Alt-text goodness! That’s right! Just like the cool people in those other webcomics! Get an added chuckle (or wry observation from yours truly) by mousing over the comic. It’s not much, but’s it’s the little things in life that count, right?

    (From http://www.puckcomics.com/?comic=puck-177)

  25. Morris Keesan Mar 12th 2013 at 09:24 am 25

  26. Daniel J. Drazen Mar 12th 2013 at 10:02 am 26

    I used to read books onto tape for the visually impaired, so I remember some of the rules for describing equations, chemical formulas, etc. As a math layman, I at least knew that r” (r double prime) is NOT the same as r” (r with a quotation mark superscript). And r” does NOT “denote inches” since geometry basically doesn’t assign that kind of meaning to the numbers of a problem. That to me was the joke right there.

  27. James Pollock Mar 12th 2013 at 11:09 am 27

    “r″ does NOT ‘denote inches’ since geometry basically doesn’t assign that kind of meaning to the numbers of a problem.”
    Oops. At that point, you’re not talking geometry, you’re talking algebra. And while the common shorthand of ‘ for feet and ” for inches is usually used for constants, it can be used for variables.

  28. Withering Heights Mar 12th 2013 at 11:38 am 28

    The display of both math and Unicode geekery in one discussion is a beautiful thing!

  29. Arthur Mar 12th 2013 at 12:43 pm 29

    It might have taken me a bit to understand this, except I’m quite familiar with Mathmanship. Informative and entertaining. I don’t know where it originally appeared, but you can find it in “Stress Analysis of a Strapless Evening Gown”.

  30. The Vicar Mar 12th 2013 at 08:36 pm 30

    @Dave in Boston, #14:

    Your source is wrong. The current Unicode standard still has U+0027 listed as “APOSTROPHE”, albeit with a note saying that the right-hand typographer’s apostrophe at U+2019 is “preferred”. Nevertheless, it is a displaying code point and is not deprecated, so if the code doesn’t display it the code is just wrong.

    I’m guessing the person who told you wrong was a Linux developer, because incorrectly interpreting an external standard in a way which makes life harder for both developers and end users is typical open-source behavior. (Although I could still be wrong: you didn’t say that they were making an inferior ripoff of an existing product at the time. That would have made it 99.99% certain.)

    @Stan, #15:

    “Bored at work”? Look at the timestamp on my comment again and then YOU tell ME.

    @UXO, #17:

    Not Unicode, misused punctuation. It was bad enough back pre-Unicode when Windows users were constantly screwing things up either by using plain ASCII for no good reason or assuming that everyone else was using Windows Latin 1 encoding (which was doubly insulting because even foreign Windows users weren’t using it). To screw things up now, when it’s so easy to get them right, is almost insulting.

    @Kilby, #18:

    Hover text — and, incidentally, strictly speaking it’s the “title” attribute of the image, not the “alt”, although they may of course be the same, but it’s the “title” which gets displayed — can be anything in the Unicode character set. It’s just an HTML attribute; you might have to use character entities to encode anything outside of basic ASCII for the more feeble browsers, but a prime symbol should work just fine. And I refuse to believe that the guy who draws XKCD hasn’t tinkered with his system until he can deal with Unicode in his CMS.

    @Bob in Nashville, #21:

    Actually, I’m fairly certain you got the hover text jokes slightly wrong. It’s two separate jokes: the first one is that “r prime” refers to the radius of “Earth Prime”, from one of the many tedious superhero comicbook universe retcon justifications, and the second is making the same sort of joke as the main comic: that is, the symbol’s meaning is deliberately used in a way (quotation mark as “inches” symbol) which is not the one you would expect in the context of mathematics (second derivative). (In fact, I don’t think I’ve ever seen a quotation mark used to denote inches after a variable letter, have you?) I doubt the two jokes were meant to connect up.

    @Morris Keesan, #25:

    There is a very good pair of practical reasons why you should not put multiple spaces after the closing punctuation of a sentence on a computer. The first is that any decent non-monospaced font will add enough visual space after a full stop, question mark, or exclamation mark if you use one space. (Once again, open-source fonts generally do not qualify as “decent”, so Linux users may not get the same performance as everyone else.) The second is that in the context of a web page — which is where most typing is displayed these days — except in very specific places, excess white space is just stripped and replaced with a single space. You can put as many spaces as you like after your periods, but you’re just wasting time if they’re going onto the web. As with many other Sheldon strips, especially the ones with Arthur (yes, I read and enjoy the strip), the author just ignored any sort of reasoning on either side and went for the joke anyway.

  31. Elyrest Mar 12th 2013 at 09:39 pm 31

    “Look at the timestamp on my comment again and then YOU tell ME”

    Timestamps on CIDU are always CIDU Bill’s local time - or in this case, since daylight savings time has started in the U.S. and this site is always slow in catching up, an hour early. If there’s a timestamp of 03:35 am it’s actually 04:35 am, but if you were posting from London it would really be 08:35 am. This is a reasonable time to be at work. Some people work nights too.

    * If my math is off it’s because I’m still confused by the time change. That’s my story and I’m sticking to it.

  32. Mark in Boston Mar 12th 2013 at 10:42 pm 32

    “The reason Linux and Unicode do not get along is that Unix has always been based on a model where everything is a stream of bytes, and in Unicode fantasyland this is not allowed. The net result is that stuff doesn’t work.”

    Unicode and streams of bytes are not incompatible at all.

    Unicode is abstract. Unicode is a code that assigns a “code point” (a number) to every character.

    A stream of bytes is concrete. All Internet communication is done with streams of bytes. A disk drive holds an array of bytes. Every modern digital storage medium holds bytes.

    Unicode must be converted to a stream of bytes using an “encoding” such as UTF-8 or UTF-16. The ASCII characters use the same byte coding as their equivalents in UTF-8, so UTF-8 “looks like” ASCII. (Even the ASCII control codes, 0 through 31 in decimal, carry over as code points U+0000 through U+001F, though they are control codes rather than printable characters.)
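
    A minimal Python sketch of that distinction (the string is just an example): the abstract code points stay the same, while the byte stream depends on the encoding chosen.

        s = "r\u2032"   # 'r' followed by U+2032 PRIME

        print([f"U+{ord(c):04X}" for c in s])   # abstract code points
        print(s.encode("utf-8"))                 # one byte for 'r', three for the prime
        print(s.encode("utf-16-le"))             # two bytes per code unit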

    If you are a software developer you are absolutely, positively required to know everything here: http://www.joelonsoftware.com/articles/Unicode.html . No excuses!

  33. The Vicar Mar 12th 2013 at 11:07 pm 33

    @Mark in Boston:

    The “stream of bytes” paradigm is what is holding everything back. “Stream of bytes” means that higher-level programs can’t be allowed to communicate in any high-level way because enforced structure cannot be permitted.

    Here’s an experiment you can (theoretically) do. Go find a Mac Classic (shipped 1990). Doesn’t matter if the hard drive is dead or you don’t have an old boot disk; you can boot that model into the version of the OS which was current when it shipped (System 6.0.3) by holding down Command-Option-X-O. Dig out a copy of MacPaint and a copy of either Microsoft Word or MacWrite, whichever you like.

    Now, go into MacPaint and draw something. Select it, choose “copy”, and then quit the program and launch Word/MacWrite, make a new document if necessary, and choose “paste”. You get: a graphic embedded into your word processing document.

    Delete the drawing. Now type something up, using different fonts and styles, select it all, and choose “copy” again. Now go back into MacPaint and choose “paste”. Voila! Your text, complete with styles, in your pixel-based graphics program.

    Now, Macs were doing this in 1990. (Actually, they were doing it as far back as 1984, strictly speaking, but the Classic is a convenient machine to test because of the ROM system software.) Windows took a lot longer, but eventually got there (Windows 98 was where this became reasonably reliable, although 95 had it mostly down).

    Now for the other half of the test: go into pretty much any current distro of Linux. Take any graphics program and any word processing program, other than programs which are integrated into the same larger package (such as OpenOffice), and try the same thing. The graphic won’t come through at all. The text will come through, but the style information will be lost. It has to be, because supporting graphics would require a structured clipboard, as well as a unified level of support and structure in the API — all things which Linux is missing because in Linux everything IS a stream of bytes. Attempts to introduce structure have been made (KDE and GNOME have both faintly realized that Apple and Microsoft are kicking their puffy white open-source butts in the end-user market even though they are giving their work away for free, and have been trying hard to provide structure, but quite aside from the fact that open source programmers generally can’t think their way out of a paper bag, the mere fact that GNOME and KDE are two separate projects providing two separate systems which won’t be available on other desktop environments for Linux means that there is little incentive to support either system and what little developer support there is ends up being divided between the two) but they have not been successful.

    The “stream of bytes” paradigm means that Linux has been prevented from even developing a decent clipboard. It now lags 29 years behind the original Mac (30 years behind the Apple Lisa, which had it before the Mac did) in a basic capability which end users actually appreciate having. It even is starting to lag seriously behind in server administration, because Windows has an object-oriented command line (PowerShell) which runs circles around the traditional Unix shells in the hands of anyone who bothers to learn how to use it.

  34. The Vicar Mar 12th 2013 at 11:20 pm 34

    Oh, and further: as is typical for Joel, he doesn’t know about anything except Microsoft’s way of doing things. Sorry, Mark (and Joel), but Macs had character encoding support worked out in a reliable, sane, OS-level way long, long before Unicode came along. (System 5, if I recall Inside Macintosh correctly, which would be sometime around 1987.) If Mac users had been able to avoid having to deal with DOS/Windows and Unix/Linux, then they would have had no problems whatsoever (other than, of course, the fact that the fonts to actually display foreign characters weren’t included — the text itself would have been fine). And the Internet Explorer method of auto-detecting the encoding of a web page wasn’t “interesting”, it was stupid.

  35. Dave in Boston Mar 13th 2013 at 03:08 am 35

    Yes, my source quite likely was wrong. It was, as you surmise, a Linux developer; or actually I think he was from freedesktop.org, but that’s effectively the same thing. (Do remember, though, that Linux is neither the beginning nor the end of the open source world.)

    System 5 did not, to the best of my knowledge, handle CJK text. That only came later. Dealing with characters that do not fit in bytes is by far the largest part of the problem, at least at the level we’re talking/complaining about. This is what creates problems with Unicode and Linux. (And, Mark in Boston: because Unicode code points do not fit in bytes, “using Unicode” in practice means “using UTF-8″, and the problem with UTF-8 is that not all sequences of bytes are legal UTF-8. Since Unix (not just Linux) has always been based on streams of bytes, tools that operate on streams of bytes have no way of knowing if those bytes are supposed to be text or not; they can try to treat them as text, in which case operations on non-text, such as images, audio, etc., fail and/or lose data, or they can ignore the UTF-8 and treat them as bytes, in which case text operations like sorting do not work properly. This is a deep-seated problem and no Unix or Linux vendor, including Apple, has ever done more than paper over it, with predictable long-term results.)
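
    A small sketch of that last point, with an illustrative byte string: the same bytes that fail as UTF-8 can be perfectly good in another encoding.

        data = b"\xff\xfeA\x00"   # plausible as UTF-16-LE, never valid UTF-8 (0xFF cannot occur)
        try:
            data.decode("utf-8")
        except UnicodeDecodeError as err:
            print("not UTF-8:", err)
        print(repr(data.decode("utf-16-le")))   # fine as UTF-16-LE: '\ufeffA'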

    Now, going on to pasting things… I hate to tell you this but those objects (even in MacOS Classic) are also just streams of bytes. The reason it doesn’t work except between pieces of software that have been explicitly made to work together… is that when it works, the reason it works is that both sides have agreed on formats and labels so that those streams of bytes can be interpreted properly. This has been (and remains) problematic anywhere and everywhere there hasn’t been a single hegemon to declare and enforce a single standard. Try on Windows pasting objects between programs that aren’t integrated into the same larger package… oh wait, there aren’t any of those any more.

    If they weren’t streams of bytes, you wouldn’t be able to transport them at all, you know. This is one of the most basic things Apple got wrong in MacOS Classic and the only reason “if Mac users had been able to avoid having to deal with…” is an issue at all.

  36. Stan Mar 13th 2013 at 03:51 am 36

    @ Vicar 13, How the he_l do I know what you do for a living? There are any number of jobs that require employees to work throughout the night. I was simply expressing concern for your well-being, man. I don’t care anymore, though. Rant about minutiae of computer systems until your little heart’s content.

  37. Proginoskes Mar 13th 2013 at 03:59 am 37

    @Vicar (#13): Yes, it’s an apostrophe, but it’s pronounced as “prime” in a mathematical context, which is what the comic is in. I guess you never made it to Calculus.

    @ James Pollock (#27): yes, r” doesn’t mean “r inches” in algebra or geometry, but the mixture of ” meaning “inches” and the misuse of notation is what the joke is trying to convey here.

    Just like my (AFAIK) proof that -1 = 1, by confusion of notation:

    1 = abs(-1) = |-1| = det([-1]) = -1.

    (For those of you who don’t remember/know, abs is the absolute value function, and det is the determinant of a matrix, both of which use vertical lines.)
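
    A quick sketch of the same notational pun in code, assuming numpy is available for the determinant:

        import numpy as np

        print(abs(-1))                           # 1    : |-1| read as an absolute value
        print(np.linalg.det(np.array([[-1]])))   # -1.0 : |-1| read as a 1x1 determinant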

  38. MollyJ Mar 13th 2013 at 06:16 am 38

    I’m posting from work. Look at my timestamp. Been here all night.

  39. The Vicar Mar 13th 2013 at 03:04 pm 39

    @Dave in Boston:

    Your source wasn’t “quite likely” wrong, he was wrong. I already gave you a link to the standard itself, straight from the source, and that code point is a printing glyph.

    I’m fairly sure that System 5 did indeed handle CJK text; IIRC it was one of the features covered in the volume of Inside Macintosh which accompanied the Mac II, which was the one which came out at the same time as System 5. (The difficulty was a lack of fonts, not a lack of the ability to handle the text.) It was certainly something released before System 7, and something which was actually on the cover of the relevant IM volume, so it’s either 5 or 6. (Which means either 1987 or 1988.)

    Yes, of course there are graphics and word processing programs on Windows not made by the same vendor as part of the same package. Are you trying to get things wrong? Quite aside from the fact that every halfway decent Open Source program for end users can run on Windows as well (which is, by the way, a mild argument in favor of Windows over Linux), the most popular graphics package in the world and the most popular word processor in the world are from two different companies: Adobe Photoshop and Microsoft Word. (For that matter, even Microsoft’s own graphics program, Paint — or whatever they’re calling it now; IIRC the name was changed slightly — is not part of a suite at all.)

    And yes, technically structured data is a stream of bytes inside the computer. That’s totally irrelevant. That’s just as disingenuous an argument as the people who say “technically, everything is chemicals, so you can’t object to chemicals in food”. Dur hur hur. Linux is trapped not by the fact that the data is a stream of bytes but by the fact that every program is expected to always accept any stream of bytes as input. There isn’t even a MIME type associated with data in the clipboard to alert the program accessing it to what it’s actually coping with. A real GUI OS — which is to say “an OS which isn’t designed by open-source people” — actually enforces (gasp!) context in the clipboard, and provides an API to actually handle display of the more common types. Since Linux doesn’t even guarantee that there will be any API present at any time or of any type (the kernel team even deliberately — and yes, it is deliberate, gregkh has confirmed this — changes the kernel ABI to break things on purpose), there’s no chance that any Linux GUI will ever actually resolve this problem. Open source people just think about the whole concept of GUIs the wrong way.

    @Proginoskes:

    No, they are not apostrophes. Go and look at an actual printed mathematics work, and you’ll see that the “prime” symbol is, and pretty much always has been, a separate piece of punctuation, even though it looks similar enough to an apostrophe that people may use apostrophes instead for convenience. That doesn’t mean an apostrophe is the same as a “prime” symbol any more than it means two hyphens in a row is an em dash.

  40. Mark in Boston Mar 13th 2013 at 06:26 pm 40

    I’ve been around for a long time, and I remember computers that did not have bytes. The GE-225 for example had an 18-bit word and was word-addressable. The character encoding used 6 bits per character allowing you to stuff 3 characters into a word.

    Every computer had its own way of doing everything. Just about the only thing the early digital computers had in common with each other and with modern digital computers (not counting non-digital computers, and there were plenty of those) was the use of two-state storage elements, meaning that fundamentally the data was stored in binary form. And many of them were binary-coded decimal instead of straight binary.

    NOW we use a standard that groups bits into groups of 8 (not 6 or 12 or 18) and call them “bytes”. But don’t think that is the only way to do things, or the only way things were ever done.

    NOW we line up the bits in memory, or on disk, in such a way that they can be seen as a one-dimensional line, and we number the first group 0, the second group 1, and so on, giving each group an “address”. So it’s natural to think that this is the only way to do it. But not all computers did THAT.

    So don’t make the mistake of thinking that some things, like streams of bytes, are the natural, primitive, ways of doing things for which no standard was ever needed.

    If I applied The Vicar’s logic to music I would say this:

    “The piano is nature’s perfect musical instrument. I can sit at any piano and play any piece of piano music, no matter who composed it, because the piano was the same for Mozart and Beethoven as it is now. I know there will be 88 keys. Every piece fits.

    “People have tried to get me to use pipe organs. What a mess! There are 61 keys. There are multiple keyboards! What, one wasn’t enough? There’s a keyboard you’re supposed to play with your feet. And to make matters worse, every pipe organ has a different set of stops! And not one has a damper pedal! You can’t play a Beethoven sonata on a pipe organ! If only organ builders would learn from piano makers how to design a musical instrument.”

  41. The Vicar Mar 14th 2013 at 12:12 am 41

    @Mark in Boston:

    You have it exactly backwards; in music a Linux user is one who would sit and say “the piano is the perfect instrument; I can string up new keys which will behave exactly like the old ones but with lower or higher tones, and therefore I can play anything at all”. It takes a Mac or Windows user to say “if everything is a series of piano notes, then there’s not much point in having more than one musician. I want a brass quartet, a violin trio, a symphony. We need more structure and more instruments.” So all the actual composers and talented musicians end up learning multiple other instruments, and treat a piano as a makeshift for when they can’t get something better.

    And centuries later, the open source world would be selling fifteen-foot wide pianos, with extraordinarily awkward knee-driven attachments to play an extra two notes in a vain effort to compete with the notion of having multiple other musicians working at once, and claiming that you don’t need to use other instruments because they can’t play any sequence of notes whatsoever and are therefore inferior, or because they involve paying money to that vampiric old Stradivarius, or because everyone should be able to retune their own instrument (even though 99% of musicians actively prefer to pay someone else to do that).

    Here’s another fun experiment to try at home: name a successful open-source project for end users which is not a copy or a descendant of a pre-existing closed-source program. Good luck!

    (Oh, it just struck me that you may have interpreted my claim that the Mac had superior and multibyte text handling to DOS/Windows and Unix/Linux before Unicode to mean “Macs should never have had to change to Unicode”. If so, you really need to work on your reading comprehension. Then again, if you work with Linux/Open Source, you probably not only constantly encounter that sort of stupidity but quite possibly have it yourself, so I guess that’s forgivable.)

  42. Dave in Boston Mar 14th 2013 at 03:27 am 42

    Yes, it is quite likely that my source was wrong. The other possibility is that I have misremembered or misreported what he said. However, if you’re going to continue to slag off all open source developers because freedesktop.org includes some meatheads, there’s not much point in prolonging this discussion.

    Mark: the idea that files can be nothing more complicated than streams of bytes was one of Unix’s big contributions to OS design. Before that it was (and in some dusty corners still is) a madhouse.

  43. feuerstein Mar 25th 2013 at 04:18 pm 43

    haha, fj @23, as if anyone other than a c programmer knows what you mean by overloading. : h

  44. feuerstein Mar 25th 2013 at 04:36 pm 44

    withering heights @28: yes! agree!

  45. feuerstein Mar 25th 2013 at 05:05 pm 45

    ya know, everything can be seen from various viewpoints. and computer data can always be seen as a stream of bytes. and i do mean all data. i come from the times of various manufacturers, and, to make an effort to see every set of data from a common viewpoint, is such a rare thing. yes, every single set of data can be viewed as a stream of bytes. as it can be viewed as a set of blocks. or as a bunch of sentences. the art is, to be able to program as if it were ANY of those things. because, to the computer, it is so totally irrelevant, how the human views it.

    it is the programmer’s (that was an apostrophe) responsibility, to present the data in a form that is meaningful to the user, regardless of the form in which it is stored on the … device. whether it is a hard drive, a diskette, or a vibrator. vicar might see that. people who don’t program, have difficulties. and bad programmers point the finger at the medium or the user.

  46. feuerstein Mar 25th 2013 at 11:25 pm 46

    ok, and a post for typesetting natsis.

    i use three spaces after every period. even if they don’t appear in the post, they make it easier to proofread. and even after proofreading, i miss errors. anyway, all typos are the fault of autocorrect, so why do i bother? yeah, because it looks better.

    same reason people insist on using different fonts in their mails. i receive everything as text, so i could say it’s (apostrophe) a waste of time. but i don’t complain. they set their font so it’s (apostrophe) easier or prettier for them.

    by the way, i started using 3 spaces after all punctuation after the punctuation natsis here had a rant some time ago about how dumb it is to use two spaces. so i decided to use three, hoping the software only turned two spaces into one space.

    it doesn’t. but typing the spaces is automatic, and looks so much better when i submit the text.

    : )

    (that was a colon, a space, and a right-parenthesis. i use the space, because i never know when software changes it into a picture, and i still haven’t had software correct the extra space. advantage is here, also, that when i need text followed by a parenthesis, the space is so automatic, that i end up thinking about how the computer might translate my typing.)

    for several people here, : ) becomes :) and turns into a little picture.

    heehee.

    kilby’s point about punctuation is almost crazy. germans have trouble with apostrophes in english, partly because in german, possessive has an s, without an apostrophe, so they learn to add an apostrophe, but add it in places it doesn’t belong. in german, typing all lower case is difficult to read because many words are disambiguated by the first letter being a capital. yes, it is clear from context, but reading is easier when the disambiguation occurs without having to consider context.

    all of which is what the cartoon is alluding to.

    : )

    yes, i know. old post. i don’t really care. i always look at page two, so that i’m not tempted to say something even more stupid.

    : )

  47. The Vicar Mar 26th 2013 at 02:38 am 47

    @Dave in Boston, etc.:

    “Everything in the computer is a series of bytes” is true, but also a pointless observation. The closest analogy is saying “all matter is composed of atoms”. This is true. At one time, it was a revolutionary discovery. But as a matter of coping with the world on a daily basis, it gets you nowhere.

    When you’re designing, say, refrigerators, you don’t sit down and say “refrigerators store matter, so I’d better design my refrigerator to safely store every form of matter; I’ll need a layer of strong basic material in the bottom in case they pour in fluoroantimonic acid without a container, plus a layer of metal because they might put in acetone and destroy plastic, but I also need to coat it with metal because they might drop in a huge spiked ball that would puncture the plastic…”. You sensibly say “refrigerators are places to store food; people do not pour liquids in without containers, or store huge spiked balls, or other things like that.”

    In the same way, it is tremendously foolish to say “everything in the computer is a sequence of bytes, so every program had better be able to handle — as any form of input whatsoever — any arbitrary sequence of bytes” and then proceed to error check everything. What that leads to is a massive amount of reinventing the wheel as every programmer solves these same problems independently — many of them getting it wrong-but-not-wrong-enough-to-make-it-immediately-obvious (which is the state of e-mail and non-ASCII text handling), or technically right but ludicrously inefficient, or right but such a brittle implementation that the program will choke fatally if the user does something even slightly wrong. (And when the thing being reinvented is the interface, then reinventing the wheel becomes almost an act of sabotage against the user, who has to un-learn things they already knew in order to learn the program’s nonstandard way of doing things!)

    If that metaphor isn’t technical enough for you, try this: when you build a program which accesses the network, you don’t start at level 3 of the OSI model every time, write your own implementation of IP, then your own version of TCP on top of it, and eventually work your way up to higher-level stuff like e-mail messages. For one thing, you’d introduce all sorts of bugs which would make your program fail to work with other programs. For another, by using an existing implementation which is built into the OS, you permit speed increases and bug fixes made in one piece of code to benefit every program which performs that function.
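
    A minimal Python sketch of that point (the URL is only a placeholder): lean on the networking stack the platform already provides instead of rebuilding TCP and HTTP from raw bytes.

        from urllib.request import urlopen

        with urlopen("http://example.com/") as resp:   # the library handles sockets, TCP, and HTTP parsing
            body = resp.read()
            print(resp.status, len(body))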

    This stuff is obvious, it’s common sense.

    The reason, or at least one reason, the reason which matters in this context, that I’m so against Unix/Linux — and open source in general — is that even though this stuff is obvious and common sense (nobody bothers to implement their own vector class in C++ any more, for example, because the template library gives a really good one to you), and open source programmers will generally admit it, they constantly violate these ideas in practice.

    As mentioned, environments (both GUI and just general API) for Unix-y systems (other than Mac OS X, which is pretty distinct from Linux and open-source BSD) don’t standardize on things, so that every programmer has to start from scratch and the user has to constantly work with programs written at cross-purposes by people with totally different intent. Pretty much any task a programmer might reasonably expect to find a library to handle will have two or more competing libraries in the repository — both partially broken and both with totally different requirements and expectations. By never enforcing that the system will provide an interface other than “we’ll hand you a stream of bytes, or the path to a file which you will have to read in as a stream of bytes”, the open source world enforces that every program has to start off from a stream of bytes, while the proprietary software world enables programmers to start off with a bunch of optimized, debugged, reliable systems with useful structure, which tend to get faster and better over time while the Linux ones are pretty stagnant. The problem is both ancient and endemic (both X11 and Wayland are both incredibly bad graphics systems compared with those in Mac OS X and Windows, or even BeOS — and BeOS has been dead for years! — and Linux audio is a mass of chaos, with reinventions of the wheel in the form of something like 16 different audio APIs, which are all semi-broken and mostly dependent on each other, strewn all over the place because the kernel refuses to provide real audio services like any self-respecting OS does). Desktop Linux is only marginally less broken now than it was in 2000, and chances are that in another 13 years it will only be marginally less broken than it is now.

    Just to forestall the inevitable (as proved by Mark in Boston above) post saying that, because I think Mac OS X and Windows handle these problems in a vastly superior way to the Unix world, I must think that they are perfect: of course not. Apple, for example, is notorious for deprecating APIs and whole GUI concepts after a few OS revisions, just as they’re catching on. (Remember “balloon help” back in System 7? Great idea, mediocre technical implementation, lousy developer tools. Instead of improving the system — which could have been done quite easily — they just jettisoned it entirely.) And as for Microsoft, they can’t even decide if your PC is a tablet or not, and do weird things like define a cursor position when the device is a touchscreen tablet with no mouse. But on either platform, if I want I can write a low-level program, as on Linux, which will treat all input as a stream of bytes, but I can also not treat everything as a stream of bits and bytes and rely on the OS to provide services to parse things for me and provide neatly-packed, pre-validated structures and reliable services — I think it’s actually possible, with Apple’s latest developer tools, to create a GUI-compliant arbitrary-image-file-displaying program without even having to write a line of actual code — and the open-source world never, ever, ever does that.

  48. feuerstein Mar 27th 2013 at 09:58 am 48

    vicar, re my post at 45, being able to visualize the data is different from writing a program. a programmer who can visualise what’s actually in the data, and can get it arranged in a way that is useful, is what’s important. i can’t suggest writing your own operating system. i don’t like any of the current operating systems, because they like to make me use a wrench to remove a nail. so i switch platforms regularly to do things i need to do.

    i’m not saying one platform should do everything.

    ms is basically a toy. ios is a toy, so everything it runs on comes across as a toy. it’s not a huge problem. i can switch toys easily. but for doing some things, the toys don’t let me do it, even though the computer underneath excels at what i need.

    so i have to have several larger toys, and make each one do what i want.

    the first computers could view all data as streams of bytes. yes, it is a trivial idea, but imagine trying to build your fridge without atoms. tricky.

    saying no one ever needs to think of matter as atoms is incorrect. nuclear physicists are pretty much good at atoms. and they use the fridge.

    and yes, i work with software at the hardware level. on one of my windows machines, in fact a laptop that is 17 years old and still does what i need it to, i had to write a stupid keyboard interrupt to make the debugger work correctly.

    ok, i could have spent 400 on a debugger. but the one i was used to, well, i was used to it. and a couple hours later, i could still use the debugger on w2k.

    i also have an ipad, which is useless, except for surfing, and even there it doesn’t like treating web pages properly. i use it to read news on the train, and to do email. and it can’t even get email right. jeez.

    anyway, every data can be treated as whatever you want. the trick is getting a toy that lets you display the pictures properly, or turn off the stinking pictures and videos when all you want is text. i could turn off the ads in a sec, if i cared to rip the ipad’s backup files apart and store a decent browser that ignored anything i wanted it to ignore. the programming is easy. ripping apart the backup format of a toy is almost degrading.

    but the underlying hardware on this thing is very nice. i’m not allowed to use it, so it may as well be that greeting card with the little lead-filled battery, playing a cute tune, and thrown in the landfill, illegal or not.

    heh, five years ago i bought some hardware. in total 16 3ghz processors in smc, with 1024 gb storage. memory, not hard disk.

    used, old and unwanted, and the cases and processors and memory from various places. once configured, it would have had a new price 13 years ago of over 100,000. i paid around 7,000. and it does what i want, and where i want it. and try to find an os that can actually use the memory, haha.

    i don’t care that no one else can use it. i care that some simple changes in the programming practices of common platforms would let me use common workstations without having to dig into the software. AND let me use my programs without changes on future workstations.

    don’t bother arguing with me about nuclear physics (ie what hardware and software CAN do). the hardware can do what i and many of my peers need. the os could if it cared to, and with very minor policy changes at the source.

    use your atoms to make your toys, but let me use my atoms to make my toys.

    boh, too long; won’t read.
