I’m not processing this one…

Cidu Bill on Jun 14th 2013

[Pardon My Planet comic: report card]

He draws nice pictures but he’s a little slow??

Filed in Bill Bickel, CIDU, Pardon My Planet, Vic Lee, comic strips, comics, humor | 26 responses so far

26 Responses to “I’m not processing this one…”

  1. New-ish Guy Jun 14th 2013 at 12:18 pm 1

    Pretty much how I took it. He dresses nicer (or takes baths) and is more polite than before, but he’s still dumb as a rock.

  2. Elyrest Jun 14th 2013 at 12:23 pm 2

    It sounds like a review of a new computer. Looks good, easy to use, but slow as last year’s molasses.

  3. Scott Jun 14th 2013 at 12:24 pm 3

    And also gets along well with others. But he didn’t really learn anything. So, yeah, you got it.

  4. Jerry Jun 14th 2013 at 12:30 pm 4

    He is physically maturing (improved graphics), gets along well with others (greater ease of use) but failed the school year and will have to repeat whatever grade he is in.

  5. billybob Jun 14th 2013 at 01:16 pm 5

    Except processor speeds aren’t increasing. From 1979 to 2003 speed doubled every 2½ years (4.77 MHz to 3 GHz), but a new quad-core 2.5 GHz has the same processor speed as one 5 years old (those better informed than me, please correct me if I’m wrong).

  6. James Pollock Jun 14th 2013 at 01:43 pm 6

    Billybob, it depends on how you define “faster”. The clock speed is NOT a reliable indicator of processing power. The internal architecture makes a difference, as does optimization for specific tasks and, significantly for current processors and computer design, the number of processor cores in play.
    Back in the Pentium days (the first Pentiums) Intel’s competitors released processors that, because they used larger caches and other performance optimizations, operated faster than an Intel processor at the same speed. To emphasize this, they marketed the chips by numbering them according to the Pentium chip they were roughly equivalent to, rather than by actual clock speed, and referring to a “Pentium Rating” (yes, “PR”!) that was the equivalent Pentium clock speed.
    If you just want to stay within Intel’s processors, you can buy a Pentium, Celeron, and Xeon processor that operate at the same clock speed… but their performance will not be equivalent.
    Processor POWER or CAPABILITY has continued to increase at the rates we’ve grown accustomed to, even if the clock speeds topped out at under 4GHz.

  7. DPWally Jun 14th 2013 at 02:25 pm 7

    He gets along well with others, but he’s being left back. He’ll have to repeat 2012. I’m not sure how graphics fits into that.

  8. furrykef Jun 14th 2013 at 07:21 pm 8

    James Pollock is quite right — today’s processors run circles around processors from five years ago, even though the clock speeds may be around the same. But I think one year probably isn’t gonna make a huge difference in processor speed.

  9. Mark in Boston Jun 14th 2013 at 10:46 pm 9

    The processors can do more things during each clock tick, and the chip has multiple processors so it can do several things at once during each clock tick.
    And yet Internet Explorer takes longer to start up than ever before.

  10. James Pollock Jun 15th 2013 at 07:42 am 10

    Back in the good ol’ days, we were able to do plenty of computing at 1 MHz. You kids today are spoiled. Spoiled I say.

  11. The Bad Seed Jun 15th 2013 at 01:23 pm 11

    I’m a geezer like you, James Pollock… I remember when 386’s were a big deal, when a Pentium had me over the moon, and how exciting it was when you could buy a PC off the shelf with over a GB of hard drive. I was in high school when our ritzy school district got its first PCs - for the Academically Talented kids only - and they were 6 Radio Shack TRS-80’s! My iPhone can barely constrain itself from bursting out laughing as I type this…

  12. James Schend Jun 15th 2013 at 03:45 pm 12

    @billybob: those processors were running 2 or 4 hardware threads, max. (A dual-core P4 with HyperThreading has 4 hardware threads. And that was top-top-top of the line a few years back.) The CPU in my relatively-cheap desktop now is running 16 hardware threads. It’s doing (up to) 4 times more work each clock cycle.

    So while it’s true that “processor speeds” (measured in hertz) haven’t increased, the amount of work processors accomplish has increased many times over.
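
For anyone who wants to see what those hardware threads look like from a program’s point of view, here is a minimal sketch (in Go, an arbitrary choice since nobody in the thread names a language; the slice size and workload are invented for illustration). It asks the runtime how many hardware threads the OS exposes and fans a trivially parallel sum out across them:

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        // How many hardware threads (cores x threads per core) the OS exposes.
        threads := runtime.NumCPU()
        fmt.Println("hardware threads:", threads)

        // A trivially parallel job: sum a large slice, one chunk per thread.
        const n = 1 << 24
        data := make([]float64, n)
        for i := range data {
            data[i] = 1.0
        }

        partial := make([]float64, threads)
        var wg sync.WaitGroup
        chunk := n / threads
        for w := 0; w < threads; w++ {
            wg.Add(1)
            go func(w int) {
                defer wg.Done()
                start, end := w*chunk, (w+1)*chunk
                if w == threads-1 {
                    end = n // last worker picks up any remainder
                }
                for _, v := range data[start:end] {
                    partial[w] += v
                }
            }(w)
        }
        wg.Wait()

        total := 0.0
        for _, p := range partial {
            total += p
        }
        fmt.Println("sum:", total)
    }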

  13. Dave in Boston Jun 15th 2013 at 10:46 pm 13

    Up to, yeah. Mostly those extra cores are just sitting there doing nothing, because writing parallel software is difficult and expensive. In practice, the Moore’s “law” party is over.

  14. James Pollock Jun 16th 2013 at 01:05 am 14

    I have no doubts that multicore optimization will be inserted into compiler design so that programmers don’t actually have to worry about it, just like they don’t have to worry about optimizing for register performance now. We’ll just hide all the details with quadruple indirection in the programming language, and let the compiler handle it.
    If nothing else, the OS will make use of the extra cores.

  15. Dave in Boston Jun 16th 2013 at 03:51 pm 15

    Unfortunately, lots of smart people spent a lot of time working on parallelizing compilers, mostly in the 80s and 90s, and pretty much failed. There’s not likely to be any help from that quarter. As for the OS, it doesn’t do enough work to keep even one core busy.

  16. James Pollock Jun 16th 2013 at 06:13 pm 16

    The nice folks at Sequent (now part of IBM) would disagree with you. (They make massively-parallel supercomputers)

  17. Dave in Boston Jun 17th 2013 at 02:49 am 17

    Your point? Lots of people (well, not *that* many) make parallel supercomputers. Writing parallel software is still difficult and expensive.

  18. Kilby Jun 17th 2013 at 05:18 am 18

    In addition to Dave’s objection (@13) about the lack of parallelization, increasing hardware power has not been matched by any meaningful improvement in software intelligence or reliability. The result of four-fold acceleration (such as James mentioned @12) is that computers are now able to make four stupid decisions in the same amount of time that they were previously able to make only one.

  19. James Pollock Jun 17th 2013 at 01:32 pm 19

    Dave, they would object to the “mostly those extra cores do nothing” part of your statement, not the “writing parallel software is hard” part.

  20. Mark in Boston Jun 17th 2013 at 05:36 pm 20

    No, writing parallel software is easy if you do it right. The entire World Wide Web is parallel software. It works because it’s all shared-nothing communicating sequential processes. Read C.A.R. Hoare’s book “Communicating Sequential Processes.” Erlang is one of the best programming languages for writing a shared-nothing parallel system.
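
Go’s channels are modeled closely on Hoare’s CSP, so a shared-nothing pipeline of communicating sequential processes can be sketched in a few lines (Go rather than Erlang here purely to keep the examples in one language; the stages and values are invented for illustration):

    package main

    import "fmt"

    // Each stage is a sequential process that owns its own state and talks
    // to its neighbours only by passing messages over channels: no shared
    // variables, no locks.
    func main() {
        nums := make(chan int)
        squares := make(chan int)

        // Producer process.
        go func() {
            for i := 1; i <= 5; i++ {
                nums <- i
            }
            close(nums)
        }()

        // Transformer process: receive, square, forward.
        go func() {
            for n := range nums {
                squares <- n * n
            }
            close(squares)
        }()

        // Consumer runs in the main process.
        for s := range squares {
            fmt.Println(s) // 1 4 9 16 25
        }
    }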

  21. fj Jun 18th 2013 at 10:47 am 21

    James Schend mentioned that his processor had 8 cores and 16 threads, so it did four times as much per clock cycle as a Pentium D (P4s were single core processors, so I assume he was referring to a Pentium D). Actually, it does MUCH more than that. For example, a Pentium D 945 ran at 3.4 GHz, and could manage a Passmark score of 770. In contrast, an i7 2600K (Sandy Bridge, like the currently available 8-core Xeons that Mr. Schend is likely using) also running at 3.4 GHz can post a Passmark score of 8,400. It has double the cores, but offers more than 10x the performance. So that 8-core Xeon is doing roughly 20 times as much per clock cycle as a Pentium D.

    That highlights the degree to which parallelism on a much smaller scale has improved the performance of what are, at least logically speaking, single-threaded execution paths. Of the five orders of magnitude in single-core performance we’ve enjoyed over the last three decades or so, we can attribute about three orders of magnitude to faster processor speeds. The remaining two orders of magnitude have come from improvements in pipelining and parallel resource utilization that allow a given processor core to effectively execute multiple instructions at once. A VAX 780 could only manage to execute one instruction every 10 clock ticks or so (0.5 MIPS on a 5 MHz clock). The current crop of i7 quad processors is approaching 40 instructions per cycle. Now, to be fair, those processors have four cores: but that still means each core is hitting close to 10 instructions per cycle.

    How can each core execute 10 instructions per clock tick? A combination of great pipelining design on the hardware level, and great compiler design that changes the order of instructions to allow maximum usage of computing resources by those pipelines. And for the small bits of parallelization that would be achievable for the average algorithm (one that is really a serial design), this approach is probably better than attempting to decompose the original stream into separate threads (as it avoids the overhead associated with even a lightweight context switch). In fact, the latest processors are so good at pipelining that hyperthreading doesn’t really add that much to performance.

    And let us not forget that any decent graphics card has enough parallel processing power to run rings around a Cray II.

    As far as avoiding the complexity of parallel programming with a shared-nothing architecture, that’s a great approach for the class of problems for which it works. However, there remain many problems for which it is not a reasonable solution. Just because you can avoid sharing issues by locking a pair of toddlers in separate rooms and giving them each a complete set of toys does not mean that it is an appropriate solution to the root problem.
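
A rough way to see fj’s point about a single core keeping several instructions in flight is to compare one long dependency chain against a few independent ones. The sketch below (Go again; the array size is arbitrary and exact timings depend heavily on the machine and compiler) typically shows the four-accumulator loop finishing well ahead of the single-accumulator loop, even though both run on one core and do the same number of additions:

    package main

    import (
        "fmt"
        "time"
    )

    const n = 1 << 26

    // One long dependency chain: each add must wait for the previous one,
    // so the core's parallel execution units sit mostly idle.
    func sumSerial(data []float64) float64 {
        var s float64
        for _, v := range data {
            s += v
        }
        return s
    }

    // Four independent chains: the core can keep several adds in flight at once.
    func sumUnrolled(data []float64) float64 {
        var s0, s1, s2, s3 float64
        for i := 0; i+3 < len(data); i += 4 {
            s0 += data[i]
            s1 += data[i+1]
            s2 += data[i+2]
            s3 += data[i+3]
        }
        return s0 + s1 + s2 + s3
    }

    func main() {
        data := make([]float64, n)
        for i := range data {
            data[i] = 1.0
        }

        t0 := time.Now()
        a := sumSerial(data)
        d1 := time.Since(t0)

        t0 = time.Now()
        b := sumUnrolled(data)
        d2 := time.Since(t0)

        fmt.Println("serial:  ", a, d1)
        fmt.Println("unrolled:", b, d2)
    }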

  22. Meryl A Jun 20th 2013 at 01:27 am 22

    You want to talk computer geezer -

    In high school (1971) I learned to program in FORTRAN on a mainframe. The computer was the size of an office desk. The compiler was hardware (not software as it is now) and was also the size of a desk. One would punch holes in computer cards (which were the size of a late 19th-century dollar bill), feed the stack of cards into the compiler, and get another stack of cards from same, which were then fed into the computer. Invariably one did something wrong and had to change a card (or cards) and re-feed into the compiler… (Although, for fun, one could punch holes in a card so they spelled one’s name when you looked through it.)

    In college I learned to program in BASIC, and we sat at a keyboard, entered the info into the computer (which was somewhere we never saw), and it output onto a paper tape. Games, like the popular Star Trek one, would be a success if the computer printed “Boom” on the tape. No screen, no graphics.

    (Somewhere in this period my dad got a calculator; it was about a foot by 9 inches and was a wonder and an advancement in accounting. It added, subtracted, multiplied and divided, the last of which was extremely hard to do with an adding machine.)

    Husband bought an Atari 400 computer in the late 1970’s (before we were married). It had a black and green monitor and a cassette tape drive.

    He then progressed to a Commodore 64. It had a black and orange monitor and a 5.25 inch floppy drive.

    Then came his “IBM compatible” 286 Epson computer. It had a black and white monitor and a hard drive, the wonder that made it fast. This was followed by a 386, a 486, and a Pentium, with the addition of a 3.5-inch drive along the way. Then newer and faster ones. Strangely, almost all of them are still here. (The Atari will be worth a fortune as a collectible, I am told.) We have 5 laptops (all but one is mine).

    Currently we use my first new computer (before, when one of us needed an upgrade, I took his and he got a new one), his 2 computers, and 3 laptops. I am currently on the middle-aged laptop of the ones being used. I also have my last computer in the office, as it has not made it to the basement in 2 years. One laptop was switched to the TV as a monitor to watch the Emmys the year ABC had a fight with Cablevision, and it has never been switched back, so it is not in use.

    The first time husband came home and found me running 3 computers at once he got upset (and all were desktops).

  23. Kilby Jun 20th 2013 at 05:55 am 23

    @ Meryl A (22) - For me, learning Fortran in high school turned out to be a dead-end road, because when I went to college, it led me to make the mistake of choosing a “Vax” account for my computing privileges, rather than Unix (which would have meant learning a new language (namely “C”), in addition to an editor (vi) that I hated then, and still dislike now). However, in retrospect, C would have been vastly more useful.

    P.S. I have three machines running under and on my desk right at this moment, plus a fourth (currently off) in another room. Each one has a different operating system, and thus a separate justification for its existence.

    P.P.S. My high school chemistry teacher once told us that in the early 1970’s (before calculators), he considered purchasing a device that could calculate a square root (but nothing else). He decided to wait, because the purchase price was $3000. Just a decade later, a programmable calculator could easily be had for less than $100.

  24. James Pollock Jun 22nd 2013 at 08:36 pm 24

    Count your blessings, Kilby… I was condemned to Pascal. I hated Pascal so much it pushed me out of computer science and into the liberal arts. (Actually, the real problem was that IS/IT hadn’t been invented as a major yet.) Years later, when I did finally earn a technical degree, I fulfilled the language requirement with a sequence in assembly language. At one point I could read uncommented code in Fortran 77, COBOL, C, BASIC, and x86 assembly language, and I also had a passing familiarity with RPG II. Then the whole computer industry abandoned functional languages in favor of object-oriented ones.

  25. Dave in Boston Jun 23rd 2013 at 02:35 pm 25

    I think you’re looking for a different word; “functional languages” means Scheme, ML, Haskell, Lisp… even though these are not very functional in the pragmatic sense of the word.

    Anyhow, languages based on CSP or Milner’s pi calculus are a fine thing (although I don’t much like Erlang as a language) but rewriting the world in these languages is, guess what: difficult and expensive.

    Meanwhile, characterizing the whole web as a single piece of parallel software is rather silly, inasmuch as the whole web taken as a single unit doesn’t actually *do* much of anything in particular. Subsections of it do particular things, some of them useful, but those subsections are not the utopian distributed systems you’re describing.

  26. feuerstein Jun 28th 2013 at 03:58 pm 26

    Haha. Pascal, Fortran, BASIC, COBOL. It doesn’t really matter as much as having an Operating System. Microsoft sells a virus. Operating Systems can manage multiple CPUs, multiple users, and many kinds of hardware. Real Operating Systems have been around since the sixties, and I laugh whenever Microsoft boasts about “new” capabilities that I’ve been working with for over thirty years.

    I mean, who cares what language you program in? If the underlying system can’t manage your needs and your hardware, it doesn’t matter what language you use.
