A few unrelated technology things.
First item: the iPhone vs. “non-Apple smartphone” contrast just got a little more interesting, with Tim Bray joining Google; his main job will be to promote Android. He has an inaugural essay here which is very much worth a read.
The most interesting thing about the essay that is overt is the comparison between the iPhone platform and the Android platform with respect to “freedom.” The most interesting thing that is not overt or obvious is that Tim Bray’s name does not appear anywhere on the page. That is a style and formatting goof, but a very funny one.
Second item: A follow-up comment on my highly contentious post on Basic. It was suggested by a commenter that Python is the new Basic. Fine. So, I want to do numerical calculations. I have a choice between two languages: some primitive form of Basic and Python 2.x. Which do I choose?
Basic. Why? Because Basic can fucking do division, dudes. Like 2/3. Python can’t. Well, you can import a library module that will do division. IANI. Basic wins.
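To be concrete, here is the behavior I mean (a quick sketch; run it in any Python 2.x interpreter):

print 2 / 3        # integer operands: floor division, prints 0
print 2.0 / 3.0    # float operands: true division, prints 0.666666666667
print float(2) / 3 # casting one operand is enough, prints 0.666666666667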
Third item: Here’s a study on how students use Wikipedia.
Fourth item: The funniest thing I saw on the intertubes today.
2.0 / 3.0
Just have to use floating point.
Bray has an “author” link on that page. He is just frugal. 🙂
What on earth are you on about? Python 2.x has division.
http://docs.python.org/reference/expressions.html
“Technical Note: We originally did this in Word, but converted it to LaTeX to make it look more scientific.”
The Swede … OK, you just keep thinking that. Now, however, we know that you are a fake. Or at least that you don’t know when you are out of your depth.
For the rest of you, in Python 2.x without the imported library, 2/3 is not what you think it is!
New England Bob: Not in Basic. Which is my point. Python 2.x is a language that you have to fool into doing a very basic bit of arithmetic. For a person who programs almost exclusively for the purpose of number crunching, that makes Python a joke and Basic the superior language, at least where basic arithmetic is concerned!
I wouldn’t trade my Android phone for an iPhone, even if you paid me. I love the fact that you don’t have to get your apps through Google, although they have a good app store. Unlike the iPhone App Store, most apps are free (I think the stat is somewhere around 75% free / 25% paid for Android, and almost the exact opposite for iPhone). You can directly drop packages on the phone and install them.
From a computer engineering perspective, “x == (x/y)*y + (x%y)” makes perfect sense. If you really need a floating point result from the division of two integers, do the cast, but you should probably just be using floats for your operands to begin with.
Not that I really care one way or another. What little programming I do these days I do in perl, and for no other reason than that’s what I’m told to use.
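For what it’s worth, that identity is easy to check (a quick Python 2.x sketch, since that’s the language at issue; not anyone’s production code):

# Python 2.x floor division and modulo satisfy x == (x/y)*y + (x%y)
# for integer operands, whatever the signs.
for x in (7, -7, 2):
    for y in (3, -3):
        assert x == (x / y) * y + (x % y)
print "identity holds"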
$ echo "print 2 // 3" | python2.6
0
Looks correct to me. Proper integer division.
$ echo "print 2.0 / 3.0" | python2.6
0.666666666667
And proper float division too.
No real clue what you’re trolling about here. I guess that it has to do with Python rounding towards minus infinity instead of 0. Which isn’t a question about being able to do division, but convention.
Even though I have no real love for Python, what you’re doing is completely uncalled for.
2/3 in Python is exactly what I think it is; I’m asking for the integer result of a division between two integers, and that is precisely what I get. This shouldn’t be that hard to grasp for an expert such as yourself, if even I comprehend it. It does take an understanding of the paradigm Python was designed in to be obvious though, so I can see why you don’t grok it.
In Python 3.x this has been changed, mostly because a lot of people are not used to getting exactly what they ask for, and if you really need that crutch, you can import it in Python 2.x as well. I never do, because I can keep that little fact in my head.
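If you do need that crutch in 2.x, it really is one line (a minimal sketch; any stock python2 interpreter will do):

from __future__ import division   # opt in to Python 3 division semantics

print 2 / 3    # now true division: prints 0.666666666667
print 2 // 3   # floor division is still there, spelled //: prints 0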
If you mostly do number crunching, there are much better tools. SCIAG, FORTRAN and FORTH are good contenders, or you could go all the way and use Matlab (or one of the clones) or Maple, depending on your needs. But if you want to use Python, Numerical Python is the way to go. It’s ridiculously powerful. NASA uses it, for example. It blows pretty much everything else I’ve tried out of the water.
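A tiny sketch of the kind of thing Numerical Python makes trivial (the arrays are invented for illustration):

import numpy as np

samples = np.array([2.0, 4.0, 8.0])   # made-up measurements
weights = np.array([3.0, 3.0, 3.0])
print samples / weights               # elementwise division, no explicit loop
print samples.mean(), samples.std()   # basic statistics built in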
I know what you mean. BASIC has served me well over the years. It was easy to put together a UPC check digit calculator in BASIC, though the same could be done in C too.
On my first exposure to Python I was all “What do you mean it doesn’t do division?”
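For the record, a UPC-A check digit calculator is only a few lines of Python as well (a sketch; the sample digits are just a common textbook example):

# UPC-A: triple the digits in odd positions (1st, 3rd, ...), add the
# even-position digits, and pad the total up to the next multiple of 10.
def upc_check_digit(first11):
    digits = [int(d) for d in first11]
    total = 3 * sum(digits[0::2]) + sum(digits[1::2])
    return (10 - total % 10) % 10

print upc_check_digit("03600029145")   # prints 2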
I assure you that what I write on my blog is never, ever, under any circumstances either called for or uncalled for. The idea that you have an opinion about the calledness of what I write here is … shocking.
@11: Oh, look, you’re evading the topic. Smooth.
Tony P: Word.
I will admit that you have a point though; I am out of my depth. I’ve used Python for many years, and it’s always been self-evident to me that objects provide results in their own type. I was giving you way too much benefit of the doubt, and actually thought you had merely missed something. Not that you had been so damaged by thinking in BASIC for so long that the whole paradigm whooshed over your head and you sincerely believed this to be a flaw of Python.
That’s such a limited view of programming that I can’t even begin to relate to it. It’s really amazing how narrow a view of how to approach programming concepts some people have; it blows my mind!
Actually, I haven’t been thinking in BASIC at all. But I accept your dressing down for what it is. And later, when you realize what it is, let me know and I’ll delete it for you to minimize the embarrassment you will surely suffer.
BASIC, imperative, call it what you like. You constantly betray your lock-in to it in the way you phrase answers, like stating VB.NET has “a lambda function”, which shows you haven’t grasped what a lambda actually is, and in failing to internalize why 2/3 makes zero in Python and actually USE that knowledge instead of flailing away at what you just can’t grok.
It’s rather sad, really. Like watching a creationist keep ignoring the arguments he doesn’t even understand are arguments, and ad nauseam repeating what he KNOWS is true.
Fucking division trolls?
“Ain’t a day goes by I don’t say, ‘Shit, ya. I ain’t never seen that before.'”
If you just want number crunching, look into APL. I bet it does division. 8^)
“like stating VB.NET has ‘a lambda function’”… I don’t believe that I said THAT.
Swede, look, I acknowledge that you are an expert.
But you are also a moron and a closed-minded sloth. You could be making some pretty valid arguments about what I’m actually SAYING if you tried, if you understood it. But writing off the argument because you feel I’m not qualified to make it only makes you look like … well, the slothy moron guy.
Stephanie: A Floor Division Concern Troll to be exact! FDCT. Pronounced fuck-dicked.
Actually, I did make some pretty valid arguments. That’s what I started with. You apparently didn’t even understand they were arguments from your reaction to them.
I understand what you’re saying just fine; that you don’t comprehend the structure of Python well enough to make proper use of the language, but instead remain locked in the view that it should behave like BASIC, which is what you are used to. That’s fine, I don’t care an iota whether you change tools, use different paradigms or anything like that. What works for you is what you should use.
But when, from the observation that BASIC works fine for you, you extrapolate the hard stance that all languages are therefore like BASIC just with added “features”, and blatantly ignore arguments to the contrary (and even worse, consider your ignoring of arguments a complete refutation of them), I take offense. Much like you might if I were to state “if we came from monkeys, why are there still monkeys” and expect you to give me a thorough grounding in modern evolutionary theory in a few comments, which I then ignore.
You are *not* qualified to make the judgments you do about programming languages. No-one expects you to be. You’re not a Computer Scientist, and you haven’t spent years exploring different paradigms and hammering your head against the wall trying to wrap your brain around the subtleties of working in only objects, without any imperative functionality at all, or with nothing but recursion as your tool. Why would you, when it won’t get you anything?
But a lot of people have, and the knowledge gleaned from doing that is not something that can be conveyed in a few comments to someone who ignores the arguments made, or who states that OO is just a “crutch” and refuses to explain why.
If you really want to blow your mind, try solving one of the programming problems you have solved in BASIC using Lisp instead. Or Smalltalk. Those languages are both older than BASIC. Their paradigms and features predate BASIC. But they allow you to do things which many modern languages, like C# and VB.NET, will not allow you to do, and which BASIC will most definitely not let you do. Be prepared, though: 2/3 = 0 is immensely straightforward and self-evident compared to much of what you will encounter in those languages.
It’s not about innovation and new features. It’s about something a lot deeper and more subtle than that. It’s about paradigms, a word which encompasses a lot more than most people think it does.
It would be prudent of me to add some words from Dijkstra which are more constructive than merely bashing BASIC. One of his earlier notes, published in 1972, contains a lot of still highly relevant observations. Modern languages keep adding bloat and complexity in the form of “features” instead of becoming better at managing and reducing complexity, and Dijkstra has a few words to say on how to go about fixing this.
Almost 40 years later we’re still doing the opposite of what he suggests. It’s a short read (though much longer than a comment can be), and it does no more than scratch the surface.
http://userweb.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/EWD340.html
What it does not expressly state, and what will take much reading to completely grok, is that power of expression and simplicity are not at opposite ends of a scale, but are orthogonal. BASIC is simple and lacks power of expression. Lisp is even simpler and has orders of magnitude more power of expression. C# is a lot more complex but does not have significantly more power of expression than, for example, VB.NET.
It’s not just as simple as that all languages have the same basis and added “features”, and that any language, including BASIC, can simply have more complex functions grafted on and become equivalent in power of expression to languages which are fundamentally different.
Oh, and regarding “Visual basic has a lambda function”, it’s from here:
http://scienceblogs.com/gregladen/2010/03/whats_wrong_with_basic.php#comment-2345082
Because Basic can fucking do division, dudes. Like 2/3. Python can’t.
The difference is that BASIC uses floating-point as its default numeric type. Most other programming languages default to integer arithmetic. That may have made sense in the 1970s, when the faster execution of integer arithmetic (with integers you need only deal with the mantissa; floating-point types also have an exponent) added up to noticeable levels over the course of a program run. Today, it’s often user hostile: your average user is accustomed to spreadsheets which also treat 2/3 as a floating point operation, not a trained programmer accustomed to thinking in terms of integer arithmetic.
The version of BASIC I used (for the Commodore 64) actually did provide an integer type, but I never used it. To make an integer variable you had to append a certain character (I think it was %) to the variable name, much as you had to append a $ to the name of a string variable. Having had no formal training at that point, I didn’t understand why someone would purposely choose the more restrictive type.
So his complaint is really just about integer vs. float division? o_O
If you’re talking embedded programming, or even just “apps” meant to run on PDAs or mobile phones, it still makes sense. Those devices often have no hardware support for floating-point arithmetic. Doing it in software is considerably slower. And yes, it does have an impact, one that is very audible in, for example, audio decoding/generation.
There’s also a more general problem with floats, at least in the exponent+fraction representation used by most hardware floating-point units: they can’t represent every possible non-integer value. Often it doesn’t matter, but in certain cases those inaccuracies add up and screw everything up. There are different ways of storing float values, but they either suffer from similar problems or are significantly slower. Or both.
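The classic demonstration (a Python sketch, though any language using IEEE doubles behaves the same way):

# 0.1 has no exact binary representation, and the error accumulates.
total = 0.0
for _ in range(10):
    total += 0.1
print total == 1.0   # False
print repr(total)    # something like 0.9999999999999999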
Actually his complaint is that an integer object returns an integer object as the result of mathematical manipulation of it. From the perspective of an OO language that’s exactly what one would expect.
DrMcCoy also has a valid point. Integer math is the norm in control system, real time response systems and consumer electronics, and that’s where most of today’s computers reside. Floating point math has a very limited applicability; it’s pretty much useless in control systems, finance, media decoders and the like. It’s mostly used where the inherent lack of both speed and precision is immaterial.
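For a taste of that integer-only style, here is a minimal Q16.16 fixed-point sketch in Python (the gain and sample values are invented):

ONE = 1 << 16                    # Q16.16: 1.0 is represented as 65536
gain = int(round(0.666 * ONE))   # fixed-point volume gain, computed once up front
sample = 12345                   # a raw integer audio sample
print (sample * gain) >> 16      # scaled sample, integer math only: prints 8221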
The Swede said:
“Floating point math has a very limited applicability; it’s pretty much useless in control systems, finance, media decoders and the like. It’s mostly used where the inherent lack of both speed and precision is immaterial.”
Or in scientific data processing which is where Greg is coming from…
Aren’t you overgeneralizing a bit? Or is there no market for some dumb little book like “Numerical Recipes in C”?
Most scientific data processing I encounter either uses specialized tools like Matlab or legacy modules in FORTRAN (I’ve had to wrap a LOT of those up in other code). Some use Numerical Python which seems to gain ground rapidly. Few use core languages like Python or BASIC for that kind of work.
Which doesn’t mean core languages can’t be used for that, only that it’s rare, since they’re generally ill suited.
Most scientific data processing I encounter either uses specialized tools like Matlab or legacy modules in FORTRAN
You definitely need to get out more. I see a fair amount of scientific data processing code (and have written some of it myself) which doesn’t fall into either of these two categories. Lots of people who write such code for a living still use FORTRAN, but it’s not the same FORTRAN that was used in the 1960s and 1970s. Others use C, which actually does quite well for a naive user, or some variant thereof.
Floating point useless in finance? Let me give you one equation: y = y0 * (1+r/100)^n. That’s the formula for compound interest over n periods with a percentage interest rate per period of r. I’m sure it’s possible to code an integer arithmetic version of that, but it’s much more straightforward to do it in floating point. Some applications, like bank balances, make sense as integers, but floating point arithmetic has its place.
Yes, floating point arithmetic has the limitation of not being able to represent every number exactly. There are standard techniques for minimizing that error and not quoting results to unjustified precision. That’s the whole reason behind double precision, and if that’s not good enough there are ways to implement arbitrary precision.
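Here is that formula as a quick Python sketch (the balance, rate and period count are invented):

# Compound interest: y = y0 * (1 + r/100) ** n
def compound(y0, r, n):
    return y0 * (1.0 + r / 100.0) ** n

print compound(1000.0, 5.0, 10)   # 1000 at 5% for 10 periods: about 1628.89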
FORTRAN is what I’ve seen the most of, since I currently work in engineering and automation. There are the occasional solutions in C and VB6/VB.NET, but mostly it’s legacy FORTRAN code (which doesn’t mean it’s 1960s FORTRAN; a lot of it is in more modern varieties) or specialized applications, usually built around one of the math frameworks like LabVIEW, Matlab or Maple. I’m certain my sample is biased though, since I work a lot with advanced test systems and don’t run into homebrew “spot solutions” much. Except the ones I make myself, using whatever tool I happen to be trying out at the time.
In banks you won’t find much floating point except in on-the-spot calculations (and those will often be in Excel or something similar). The major currency data types tend to be fixed point, because of the many issues with floating point. When I did data transactions between bank departments, using floating point was a sure way to get a stern talking-to, even if it wasn’t a firing offense like attaching a Linux system to the network was. Interesting times. There are exceptions of course, and (at least previously lucrative) business ideas built up around the limitations this causes.
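For contrast, here is the kind of surprise those fixed and decimal types avoid (a minimal sketch using Python’s decimal module):

from decimal import Decimal

print 0.10 + 0.20 == 0.30                                    # False with binary floats
print Decimal("0.10") + Decimal("0.20") == Decimal("0.30")   # True with decimals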
Attaching a Linux system to the network was a firing offense?
Yep. One of my colleagues also almost got fired over moving a computer from one cubicle to another. It’s rather nuts.
Oh, and Mr. Laden, I’m prepared to accept your apology for insulting me repeatedly for being right, and for ignoring my argument and thereby claiming I have made none. Whenever you’re humble enough to admit you’ve been out of your depth the whole time.
I’m not holding my breath though. You appear convinced that the field of Computer Science holds no secrets for someone with your immense breadth and depth of varied, professional programming experience in various paradigms and many substantially different languages, and that anyone stating you do not understand the arguments made is merely being territorial and, what was it, a “sloth”.