Summary by Dan Luu on the question of whether, for statically typed languages, objective advantages (like having measurably fewer bugs, or solving problems in measurably less time) can be shown.
If I think about this, the authors of statically typed languages might, at the beginning, not even have claimed such advantages. Originally, the objective advantage was that on computers like the PDP-11 - which initially had only 4 K of memory and a 16-bit address space - compilers for something like C or Pascal could run at all, and even later, C programs were much faster than the Lisp programs of that time. Back then, it was also considered an attribute of the programming language whether code was compiled to machine instructions or interpreted.
Today, with JIT compilation as in Java, and with the best Common Lisp implementations like SBCL within a stone's throw of the performance of Java programs, this distinction is no longer very relevant.
Further, opinions might have been biased by comparing C to memory-safe languages; in other words, where actual productivity gains were perceived, the causes might have been misattributed.
What seems to be more or less firm ground is that the fewer lines of code you need to write to cover a requirement, the fewer bugs the result will have. So more concise/expressive languages do have an advantage.
There are people who have looked at all the program samples in the above-linked benchmark game and compared run-time performance and source code size. This leads to interesting and sometimes really unintuitive insights - there are in fact large differences in code size for the same task between programming languages, and a couple of languages like Scala, JavaScript, Racket (PLT Scheme), and Lua come out quite well in the ratio of size to performance.
But given all this, how can one assess productivity, or the time to get from definition of a task to a working program, at all?
And the same kind of questions arise for testing. Most people would agree nowadays that automated tests are worth the effort, that they improve quality / shorten the time to get something working / lead to fewer bugs. (A modern version of the Joel Test might include automated testing, but, spoiler: >!Joel's list does not contain it.!<)
Testing in small units also interacts positively with a “pure”, side-effect-free, or ‘functional’ programming style… with the caveat perhaps that this style might push complex I/O functions of a program to its periphery.
It feels more solid to have a complex program covered by tests, yes, but how can this be confirmed in an objective way? And if it can, for which kind of software is this valid? Are the same methodologies adequate for web programming as for industrial embedded devices or a text editor?
Worth noting here that tests should primarily serve as a (self-checking) specification, i.e. documentation for what the code is supposed to do.
The more competent your type checking is and the better the abstractions are, the less you need to rely on tests to find bugs in the initial version of the code. You might be able to write code, fix the compiler errors, and then just have working code (assuming your assumptions match reality). You don't strictly need tests for that. But you do need tests to document what the intended behaviour is and, conversely, which behaviours are merely accidental, so that you can still change the code after your initial working version.
In particular, tests also check the intended behaviour of all the code parts you might not have realized you've changed, so that you don't need to understand the entire codebase every time you want to make a small change. In my experience, tests can be a useful complement to specifications, but they do not substitute for them - especially since specs can give a bigger picture and cover corner cases more succinctly.
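As a tiny illustration of the "self-checking specification" idea, a sketch in Go (the function `Slugify` and its intended behaviour are made up for the example):

```go
package slug

import (
	"strings"
	"testing"
)

// Slugify is a made-up function under test: lowercase, trim, replace spaces.
func Slugify(s string) string {
	s = strings.TrimSpace(strings.ToLower(s))
	return strings.ReplaceAll(s, " ", "-")
}

// The test name and its cases double as documentation of the intended
// behaviour - and they keep checking it as the code changes.
func TestSlugifyLowercasesTrimsAndReplacesSpaces(t *testing.T) {
	cases := map[string]string{
		"Hello World": "hello-world",
		"  trimmed  ": "trimmed",
	}
	for in, want := range cases {
		if got := Slugify(in); got != want {
			t.Errorf("Slugify(%q) = %q, want %q", in, got, want)
		}
	}
}
```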
And there are many things that tests can check which the respective type systems can’t catch. For example, one can easily add assertions in C++ which verify that functions are not called in a thread-unsafe way, and tests can check them.
That’s a useful way to look at it, as verbose / extended documentation (amounts to exhaustive usage examples, if you’ve got thorough tests).
I don't have a metric that's quick to relate, but for me the…attractiveness or value in testing relates heavily to:

- Project lifecycle - longer and slower -> more tests
- Team size (really more like 1st derivative of team size…team "churn"?) - larger, changing faster -> more tests
Both of these are influenced by your description of tests as docs. Onboarding new engineers is way, way easier with thorough tests, for the reasons you’ve mentioned. Plus it reduces that “gun shy” factor about making changes in a new codebase.
But it’s not always better. I’ve been writing less (few, honestly) the last year or so, sadly.
An indisputable fact is that static typing and compilation virtually eliminate an entire class of runtime bugs that plague dynamically typed languages, and it’s not an insignificant class.
If you add a type checker to a dynamically typed language, you’ve re-invented a strongly typed, compiled language without adding any of the low hanging fruit gains of compiling.
Studies are important and informative; however, it’s really hard to fight against the Monte Carlo evidence of 60-ish years of organic evolution: there’s a reason why statically typed languages are considered more reliable and fast - it’s because they are. There isn’t some conspiracy to suppress Lisp.
you’ve re-invented a strongly typed, compiled language without adding any of the low hanging fruit gains of compiling.
This is aside from the main argument around static typing which claims advantages in developer productivity and eventual correctness of code.
Now, I think it is generally accepted that static typing helps with compilation to native code, which leads to faster executables.
Now, did you know that several good Lisp and Scheme implementations like SBCL, Chez Scheme, or Racket compile to native code, even though they are dynamically typed languages? This is done by type inference.
And the argument is not that these are as fast as C or Rust - the argument is that the difference might be significantly smaller than what many people believe.
Compiled or not, inferred or not (Go has type inference; most modern, compiled languages do), the importance of strong typing is that it detects typing errors at compile time, not at run time. Pushing inference into the compile phase also has performance benefits. If a program does type checking in advance of execution, it is by definition strongly typed.
So, if type checking at compile time is universally good, why are there (to my knowledge) no modern and widely used languages, perhaps with the exception of Pascal and Ada, where all arrays or vectors have a size that is part of their type?
C, C++, and Rust come to mind as other languages with sizes as part of an array's type. This is necessary for the compiler to know how much stack memory to reserve for the values, and other languages that rely only on dynamically sized arrays and lists allocate those on the heap (with a few exceptions, like C#'s mostly unknown `stackalloc` keyword).

Maybe it depends on your definition of "part of", but:

a := make([]int, 5)
len(a) == 5
len("hello") == 5
Arrays in Go have associated sizes.
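To make that concrete: for Go's fixed-size arrays (as opposed to slices), the length really is part of the static type, and mismatches are rejected at compile time. A minimal sketch (the helper `sum5` is made up for illustration):

```go
package main

import "fmt"

// sum5 accepts exactly a [5]int: the length is part of the array's type,
// so [5]int and [6]int are distinct, incompatible types.
func sum5(a [5]int) int {
	s := 0
	for _, v := range a {
		s += v
	}
	return s
}

func main() {
	a := [5]int{1, 2, 3, 4, 5}
	fmt.Println(sum5(a)) // 15

	// var b [6]int
	// sum5(b) // compile-time error: cannot use b (type [6]int) as type [5]int
}
```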
But that’s beside the point; what does array size metadata have to do with strong typing?
I did not mean slices or array size checks at runtime.
The array size can be part of the static type, that is, one that can be checked at compile time. Rust, Pascal, and Ada have such arrays.
But if static typing is always better, why are they rarely used?
There’s a false equivalency here. Array sizes have nothing to do with static typing. For evidence, look at your own words: if undisputed strongly typed languages don’t support X, then X probably doesn’t have anything to do with strong typing. You’re conflating constraints or contracts or specific type features with type theory.
On the topic of array sizes, are you suggesting that size isn’t part of the array type in Go? Or that the compiler can’t perform some size constraint checks at compile time? Are you suggesting that Rust can perform compile time array bounds checking for all code that uses arrays?
Are you suggesting that Rust can perform compile time array bounds checking for all code that uses arrays?
I’ll answer this question: no.
But it does make some optimizations at least - around iterators, and eliding bounds checks that can be proven unnecessary.
But yes it does runtime bounds checking where necessary.
If a program does type checking in advance of execution, it is by definition strongly typed.
Look here, under “typing discipline”:
An indisputable fact is that static typing and compilation virtually eliminate an entire class of runtime bugs that plague dynamically typed languages, and it’s not an insignificant class.
Just another data point, for amusement: There is a widely known language that eliminated not only memory management bugs like “use after free”, but also data races and similar concurrency bugs, in 2007. No, I am not talking about Rust, but Clojure. It does prevent data races by using so-called persistent data structures. These are like Python’s strings (in that they can be input to operations but never change), but for all basic types of collections in Clojure, namely lists, vectors, dictionaries / hash maps, and sets.
And Clojure is dynamically typed. So, dynamically typed languages had that feature earlier. (To be fair, Rust did adopt its borrow checker from research languages which appeared earlier.)
“But”, you could say, “Java was invented in 1995 and had already memory safety! Surely that shows the advantage of statically typed languages?!”
Well, Lisp was invented in 1960, had garbage collection, and was hence memory-safe.
An indisputable fact is that static typing and compilation virtually eliminate an entire class of runtime bugs that plague dynamically typed languages, and it’s not an insignificant class.
Well, going back to the original article by Dan Luu and the literature he reviews, then why do we not see objective, reproducible advantages from this?
Partly because it’s from 2014, so the modern static typing renaissance was barely starting (TypeScript was only two years old; Rust hadn’t hit 1.0; Swift was mere months old). And partly because true evidence-based software research is very difficult (how can you possibly measure the impact of a programming language on a large-scale project without having different teams write the same project in different languages?) and it’s rarely even attempted.
Because it is hard to design a study that would capture it. Because it is hard to control for the many variables that affect the "bugs/LOC" metric.
there’s a reason why statically typed languages are considered more reliable and fast - it’s because they are. There isn’t some conspiracy to suppress Lisp.
Then why is the SBCL implementation of Common Lisp about as fast as modern Java? I linked the benchmarks.
Java is still interpreted. It compiles to bytecode for a virtual machine, which then executes it on a simulated CPU. The bytecode interpreter has gotten very good, and optimizes the bytecode as it runs; nearly every Java benchmark excludes warm-up because it takes time for the huge VM to load up and for the optimization code to analyze and settle.
Java is not the gold standard for statically typed compiled languages. It’s gotten good, but it barely competes with far younger, far less mature statically typed compiled languages.
You're comparing a language that has existed since before C and has had decades of tuning and optimization, to a language created when Lisp was already venerable and which only started to get the same level of performance tuning decades after that. Neither of which can come close to Rust or D, which are practically infants. Zig is an infant; it's still trying to be a complete language with a complete standard library, and it's still faster than SBCL. Give it a decade and some focus on performance tuning, and it'll leap ahead. SBCL is probably about as fast as it will ever get.
Java is still interpreted. It compiles to bytecode for a virtual machine, which then executes it on a simulated CPU. The bytecode interpreter has gotten very good, and optimizes the bytecode as it runs
Modern JVM implementations use just-in-time (JIT) compilation of bytecode to native machine code. That can be faster than C code optimized without profiling (because the optimizer gets relevant additional information), and, for example, in the Debian-hosted Computer Language Benchmarks Game, numerically intensive tasks typically run at about half the speed of the best and most heavily optimized C programs.
And now I have a bummer: these most heavily optimized C programs are not written in idiomatic C. They are written with inline assembly, heavy use of compiler intrinsics, CPU-dependent code, manual loop unrolling, and such.
TIL there’s such a thing as idiomatic C.
Jokes aside, microbenchmarks are not very useful, and even JS can compete in the right microbenchmark. In practice, C has the ability to give more performance in an application than Java or most other languages, but it requires way more work to do that, and it is unrealistic for most devs to try to write the same applications in C that they would use Java to write.
But both are fast enough for most applications.
A more interesting comparison to me is Rust and C, where the compiler can make more guarantees at compile time and optimize around them than a C compiler can.
Honestly I'm surprised this is even still a discussion. I plan to read this, but who out there is arguing against static typecheckers? And yes, I know it's probably the verifiable NPCs on Twitter.
Notably, this article is from 2014.
Thanks. I couldn’t find a date on it.
As far as I know, there is still no scientific evidence that static type checking is generally better (in terms of developer productivity - not performance of a compiled program) by any objective measure.
There’s no scientific evidence that pissing in someone’s coffee is a bad idea, but it’s common sense not to do that.
You seem to be looking to apply the scientific method somewhere that it can’t be applied. You can’t scientifically prove that something is “better” than another thing because that’s not a measurable metric. You can scientifically prove that one code snippet has fewer bugs than another though, and there’s already mountains of evidence of static typing making code significantly less buggy on average.
If you want to use dynamic typing without any type hints or whatever, go for it. Just don’t ask me to contribute to unreadable gibberish - I do enough of that at work already dealing with broken Python codebases that don’t use type hints.
there’s already mountains of evidence of static typing making code significantly less buggy on average
What is this mountain of evidence? The only evidence I remember about bugginess of code across languages is that bug count correlates closely to lines of code no matter the language.
It’s not hard to find articles explaining the benefits of using TypeScript over JavaScript or type hints in Python over no type hints online. It’s so well known at this point that libraries now require type hints in Python (Pydantic, FastAPI, etc) or require TypeScript (Angular, etc) and people expect types in their libraries now. Even the docs for FastAPI explain the benefits of type hints, but it uses annotated types as well for things like dependencies.
But for a more written out article, Cloudflare’s discussion on writing their new proxy in Rust (which has one of the strictest type systems in commonly used software programming languages) and Discord’s article switching from Go to Rust come to mind. To quote Cloudflare:
In fact, Pingora crashes are so rare we usually find unrelated issues when we do encounter one. Recently we discovered a kernel bug soon after our service started crashing. We've also discovered hardware issues on a few machines. In the past, ruling out rare memory bugs caused by our software, even after significant debugging, was nearly impossible.
You can scientifically prove that one code snippet has fewer bugs than another though, and there’s already mountains of evidence of static typing making code significantly less buggy on average.
Do you mean memory safety here? Because yes, for memory safety, this is proven. E.g. there are reports from Google that wide usage of memory-safe languages for new code reduces the number of bugs.
You can’t scientifically prove that something is “better” than another thing because that’s not a measurable metric.
Then, first, why don't the claims about statically compiled languages come with claims of measurable, objective benefits? If they are really significantly better, it should be easy to come up with such measures.
And the second thing: we have at least one large-scale experiment, because Google introduced Go and used it widely within the company to replace Python.
Now, it is clear that programs in Go run with higher performance than in Python, no question.
But did this lead to productivity increases or better code because Go is a strongly, statically typed language? I have seen no such report - despite the fact that they now have 16 years of experience with it.
(And just for fun, Python itself is memory safe, and concurrency bugs in Python code can't lead to undefined behaviour, like in C. Go is neither memory safe nor does it have that level of concurrency safety: If you concurrently modify a hash table in two different threads, this will cause a crash.)
Do you mean memory safety here? Because yes, for memory safety, this is proven. E.g. there are reports from Google that wide usage of memory-safe languages for new code reduces the number of bugs.
Memory safety is such a broad term that I don’t even know where to begin with this. Memory safety is entirely orthogonal to typing though. But since you brought it up, Rust’s memory safety is only possible due to its type system encoding lifetimes into types. Other languages often use GCs and runtime checking of pointers to enforce it.
Then, first, why don't the claims about statically compiled languages come with claims of measurable, objective benefits? If they are really significantly better, it should be easy to come up with such measures.
Because nobody’s out there trying to prove one language is better than another. That would be pointless when the goal is to write functional software and deliver it to users.
I have seen no such report - despite the fact that they now have 16 years of experience with it.
I have seen no report that states the opposite. Google switched to Go (and now partially to Rust). If they stuck with it, then that’s your report. They don’t really have a reason to go out and post their 16 year update on using Go because that’s not their business.
And just for fun, Python itself is memory safe and concurrency bugs in Python code can't lead to undefined behaviour, like in C.
Python does have implementation-defined behavior though, and it comes up sometimes as “well technically it’s undocumented but CPython does this”.
Also, comparing concurrency bugs in Python to those in C is wildly misleading - Python's GIL prevents two code snippets from executing in parallel, while C needs to coordinate shared access with the CPU, sometimes even reordering instructions if needed. These are two completely different tasks. Despite that, Rust is a low-level language that is also "memory safe", but to an extent beyond Python - it also prevents data races, unlike Python (which still has multithreading despite running only one thread at a time).
Go is neither memory safe…
?
…nor does it have that level of concurrency safety
That’s, uh, Go’s selling point. It’s the whole reason people use it. It has built-in primitives for concurrent programming and a whole green threading model built around it.
If you concurrently modify a hash table in two different threads, this will cause a crash.
This is true in so many more languages than just Go. It’s not the case in Python though because you can’t concurrently modify a hash table there. The crash is a feature, not a bug. It’s the runtime telling you that you dun goof’d and need to use a different data structure for the job to avoid a confusing data race.
How is Go not memory safe? Having escape hatches does not count, all the safe languages have those.
As I already said: If you access and write to the same hash map from two different threads, this can cause a crash. And generally, if you concurrently access objects in Go, you need to use proper locking (or communication by channels), otherwise you will get race conditions, which can result in crashes. Ctrl-F "concurrency" here. This is different from, for example, Java or Python, where fundamental objects always stay in a consistent state, by guarantee of the run time.
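A minimal sketch of the failure mode meant here (nothing beyond the standard library, values made up): two goroutines write to the same map without synchronization, and the Go runtime typically detects this and aborts the whole program.

```go
package main

import "sync"

func main() {
	m := map[int]int{}
	var wg sync.WaitGroup

	// Two goroutines write to the same map without any locking.
	for g := 0; g < 2; g++ {
		wg.Add(1)
		go func(g int) {
			defer wg.Done()
			for i := 0; i < 100000; i++ {
				m[i] = g // unsynchronized write to a shared map
			}
		}(g)
	}
	wg.Wait()
	// Usually never reached: the runtime aborts with
	// "fatal error: concurrent map writes" (not a recoverable panic).
	// A sync.Mutex around the writes, or a channel-based design, avoids this.
}
```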
That is a consequence of having parallelism - all mainstream pre-Rust memory-safe languages with parallelism suffer from this issue, yet they are still generally regarded as memory safe. I don't know where you got the idea that Java does not have this issue; you need to know to use the parallelism-safe data types where necessary.
In Java, this wouldn’t cause a crash or an incorrect behaviour of the runtime. Java guarantees that. One still needs locking to keep grouped changes in sync and ordering of multiple operations consistent, but not like in Go, C, or C++.
Also, it is certainly possible to implement the “shared access xor mutating access” principle that Rust implements, in a dynamically typed language. Most likely this won’t come with the performance guarantees of Rust, but, hey, Python is 50 times slower than C and it’s widely used.
If you want to use dynamic typing without any type hints or whatever, go for it.
Oh, I didn’t say that I want that, or recommend to use only dynamic languages. For my part, I use a mix, and a big part of the reason is that it can, depending on circumstances, be faster to develop something with dynamic languages.
In particular, the statement "there is no scientific and objective proof that using statically typed languages is generally better by some measure (except speed)" does not mean that dynamically typed languages are generally or objectively better.
You’d be surprised. Every time I try to introduce static type hints to Python code at work there are some weirdos that think it’s a bad idea.
I think a lot of the time it’s people using Vim or Emacs or whatever so they don’t see half the benefits.
As far as I know, there is still no scientific evidence that static type checking is generally better by any objective measure.
Every time I try to introduce static type hints to Python code at work there are some weirdos that think it’s a bad idea.
Saying that there is no objective evidence for something does not mean it is necessarily always a bad idea.
Myself, I've used C, C++, Go, Rust, Java, Python, and Python plus mypy at work - for example in areas like signal processing, research, and industrial real-time systems - as well as e.g. Forth, and have used Clojure, Pascal, Common Lisp, Racket, and Scheme at home, while also trying e.g. Scala and C#.
I think that dynamic languages have their place and that using them can be significantly quicker.
I tend to prefer statically typed languages when performance and/or strict real-time capabilities are an essential concern. But I have more than once combined them with tests in a dynamic language, like Python - because these are far quicker to write. At one point back in 2002, I estimated that porting certain research code from C to Python took 1/14 of the time and lines of code compared to the original algorithm in C.
Also, there are areas which are better covered by tests than by the type systems I know. For example, while some languages like C, Rust, or Pascal do have arrays where the array size is part of the type, many array processing packages like Blitz++, Eigen, and NumPy make array size and dimensions a dynamic property. And Rust does the same with vectors, which is usually the recommended way!
So, to sum it up, it is probably an area where blanket statements are not helpful.
To be fair, Python doesn’t have type inference, so the type hints are quite obnoxious, and the type checkers for Python are also not as reliable as in languages with native static typing (even if that is often caused by libraries not providing type hints).
I disagree. Pyright has pretty reasonable inference - typically it’s only containers where you need to add explicit annotations and it feels like you shouldn’t have to - and it is extremely reliable. Basically as good as Typescript.
Mypy is trash though. Maybe you’ve only used that?
Yeah, I only used Mypy so far. I thought it was inherent to the language spec, not the type checker.
I guess, some of your colleagues might have also just used Mypy so far, though…
Nah Python is weird about static types - it only defines the syntax, not the semantics.
Ditch Mypy. Pyright is much much much better.
Python 3.13 has really good support for typing now, and while it doesn't support more complex types, opting out with `typing.Any` or `# type: ignore` still works. Type hints have been largely helpful from my experience, especially when the code itself came from someone else and is incomprehensible otherwise.
Oh yeah, I’m in favor of using type hints. People here are also saying that Pyright is more useful, which I haven’t used yet.
I was involved as the unlucky bastard supposed to sprinkle software engineering on top of a Python project 2+ years ago, and I just remember how much I understood right then and there why Python devs thought static typing was cumbersome. Felt like I had to write Java again, except it also didn’t work half the time.
Can also confirm - Pyright is a godsend.
Type inference is a feature of the type checker, not of Python itself. I’m fairly sure the type checker that’s being developed by the Astral team will have proper inference, and I’ve also had good experiences with pyright in the past.
Though it doesn’t come close to e.g. Typescript, which is a shame - Python could really use the advanced dynamic type checking features TS has.
Interesting. If it’s only in the type checker, can IDEs/editors correctly show the type information of inferred types then? Do they call the type checker themselves to retrieve that info?
If it’s only in the type checker, can IDEs/editors correctly show the type information of inferred types then?
Yep, and some (e.g. PyCharm) do. They have to be a bit careful with not assuming too much since lots of legacy code is written in fairly terrible ways, so e.g. default parameter values don't necessarily set the type of their respective parameters, but it's definitely possible and mostly a choice by the editor/IDE.
Do they call the type checker themselves to retrieve that info?
Depends on the editor! PyCharm is built on a custom engine by JetBrains, whereas e.g. the Python VS Code plugin by Microsoft (Pylance) is based on Microsoft's type checker (pyright).
Note that this post is from 2014.
So what scientific evidence has emerged in the meantime?
We know with reasonable certainty that memory-safety reduces memory bugs. This is valid for dynamically and statically typed languages.
However, under the assumption that dynamically typed programs do have a minimum amount of tests, we can’t say that static type checking is generally a better or more efficient approach.
I don't know; I haven't caught up on the research over the past decade. But it's worth noting that this body of evidence is from before the surge in popularity of strongly typed languages such as Swift, Rust, and TypeScript. In particular, mainstream "statically typed" languages still had `null` values rather than `Option` or `Maybe`.

The original author does mention that they want to try using Rust when it becomes more stable.
This is why any published work needs a date annotation.
Do you mean Dan Luu, or one of the studies reviewed in the post?
I don’t know; I haven’t caught up on the research over the past decade. But it’s worth noting that this body of evidence is from before the surge in popularity of strongly typed languages such as Swift, Rust, and TypeScript.
Well, Lisp, Scheme, and many more are strongly typed as well. The difference here is that they are dynamically (but strongly) typed, where evaluation proceeds as if types were not checked before run time.
This means, essentially, that the type of a variable can change over its run time. And this is less relevant for functional or expression-oriented languages like Scheme, Scala, or Rust, where a variable is in most cases rather a label for an expression and does not change its value at all.
In particular, mainstream "statically typed" languages still had `null` values rather than `Option` or `Maybe`.

That again is more a feature of functional languages, where most things evaluate to expressions. Clojure is an example of this: it is dynamically but strongly typed, and although it runs on the JVM, it does not raise NullPointerExceptions (the exception, so to speak, is when calling into Java).
And in most cases, said languages use type inference and also garbage collection (except Rust, of course). This in turn results in clear ergonomic advantages, but these have little to do with static or dynamic typing.
Yeah, I understand that Option and Maybe aren’t new, but they’ve only recently become popular. IIRC several of the studies use Java, which is certainly safer than C++ and is technically statically typed, but in my opinion doesn’t do much to help ensure correctness compared to Rust, Swift, Kotlin, etc.
My conclusion is that it is hard to empirically prove that “static type systems improve developer productivity” or “STS reduce number of bugs” or any similar claim. Not because it looks like it is not true, but because it is hard to control for the many factors that influence these variables.
Regardless of anyone’s opinion on static/dynamic, I think we still must call this an “open question”.
I think we still must call this an “open question”.
Not sure I agree. I do think you’re right - it’s hard to prove these things because it’s fundamentally hard to prove things involving people, and also because most of the advantages of static types are irrelevant for tiny programs which is what many studies use.
But I don't think that means you can't use your judgement about it and come to a conclusion. Especially with languages like Python and TypeScript that allow an `any` cop-out, it's hard to see how anyone could really conclude that they aren't better.

Here's another example I came across recently: should bitwise `&` have lower precedence than `==`, like it does in C? Experience has told us that the answer is definitely no, and virtually every modern language puts them the other way around. Is it an open question? No. Did anyone prove this? Also no.
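To make that precedence point concrete, a small sketch in Go (one of the languages that flipped C's choice; the flag value is made up):

```go
package main

import "fmt"

const flagReady = 0x4 // hypothetical bit flag, just for illustration

func main() {
	flags := 0x5

	// In Go, & binds tighter than ==, so this means (flags & flagReady) == 0.
	if flags&flagReady == 0 {
		fmt.Println("not ready")
	} else {
		fmt.Println("ready") // printed here, since 0x5 & 0x4 == 0x4
	}

	// In C, == binds tighter than &, so the same expression would parse as
	// flags & (flagReady == 0) - almost never what was intended.
}
```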
Well, you can conclude anything using your reasoning, but that does not give the high degree of certainty that is sought after in the studies reviewed in the article.
Again, I’m not saying that I don’t believe static type checkers are beneficial, I’m just saying we cannot say that for sure.
It's like saying seat belts reduce crash fatalities. The claim seems plausible, and you could work as a paramedic, see the effects of seat belts first-hand, and form a strong opinion on the matter. But still, we need studies to inspect the impact under scrutiny. We need studies in controlled environments to control for things like driver speed and exact crash scenarios, and we need open studies to confirm that what we expect really is happening on a larger scale.
Same holds for static type checkers. We are paramedics, who see that we should all be wearing seat belts of type annotations. But it might be that we are some subset of programmers dealing with problems that benefit from static type checking much more than average programmer. Or there might be some other hidden variable, that we cannot see, because we only see results of code we personally write.
No I disagree. There are some things that it’s really infeasible to use the scientific method for. You simply can’t do an experiment for everything.
A good example is UBI. You can’t do a proper experiment for it because that would involve finding two similar countries and making one use UBI for at least 100 years. Totally impossible.
But that doesn’t mean you just give up and say “well then we can’t know anything at all about it”.
Or closer to programming: are comments a good idea, or should programming languages not support comments? Pretty obvious answer right? Where’s the scientific study?
Was default case fallthrough a mistake? Obviously yes. Did anyone ever do a study on it? No.
You don’t always need a scientific study to know things to a reasonable certainty and often you can’t do that.
That said I did see one really good study that shows Typescript catches about 15% of JavaScript bugs. So we don’t have nothing.
Just because we cannot prove something doesn't mean that we can treat strong claims the same way as proven hypotheses. If we cannot prove that UBI is overall beneficial, we just cannot believe it with the same certainty that we would if we had a bunch of studies on our side.
Look, I’m not saying that we have nothing - I’m just saying that what we have are educated guesses, not proven facts. Maybe “open question” was too strong of a term.
Thanks for posting this! That blogsite is always very productive reading.
I don’t get the obsession with the scientific method. Movie quote: “we don’t live in the courtroom, your Honor; do we?”. You can eliminate outliers like experts and students, hobby projects and lives-at-stake projects; everything you are left with is a good reflection of the industry. Example: any study with Java versus Python has to count.
I have no real experience with dynamic languages, so I can understand where the blog responses about dynamic languages having extra bugs come from. But they miss the important point about dynamic languages allowing for non-trivial solutions in fewer lines of code - matching that would basically need <AnyType> to be implemented in the static language, with the accompanying code bloat, under-implementation, bugginess, etc.
I think the reference to 0install’s porting is real experience of the difference between Python and OCaml. Caveat: author Thomas Leonard is a seriously expert practitioner.
I don’t get the obsession with the scientific method.
Because actually knowing and understanding something is better than believing stuff. There is a lot of stuff that sounds plausible but is wrong.
I submit that laboratory-experiment-based understanding being valid in real-world use, in any domain, is itself a belief rather than knowledge. And in such an unstructured domain as software development, it is even less likely.
I submit that laboratory-experiment-based understanding being valid in real-world use, in any domain, is itself a belief rather than knowledge
I dunno, man. Did you use a plane recently? A computer? Something that contained electronics, like transistors? GPS? A weather forecast? All these are based on things like fluid physics, particle physics, quantum physics, electrodynamics, mathematics, and so on. Our modern world would simply not exist without it.
Granted, there are areas where applying the scientific method is harder. But we still do, for example in medicine. Why should this not be possible in software development?
That sounds like a laundry list of tech thrown together for effect. It is not even relevant. You are talking empirically-proven tech as a counterpoint to laboratory-only experiments, aren’t you?
I don’t know about the Wright brothers, but the human-powered flight bounty was apparently won using the strategy of fast iteration to empirically identify the solution. GPS too would have been built on real-world feedback iterations.
Computing hardware is a special case, where they replicate the laboratory into a billion-dollar structure and call it … ‘fab’ ;-)
The scientific method shorn of contact with reality, like with most research nowadays and especially in medicine, is just for show.
I know far too little about compilers & interpreters to have anything to say about performance so I’ll leave that subject to wiser programmers.
What I can say about the usage of dynamically vs statically typed languages is that I struggle with assessments that attempt to quantify the differences between the two paradigms. I've come to consider programming as a craft, and as such the qualitative properties of the tools, and especially the languages, matter significantly.
I’ve been switching back and forth between dynamic and static languages lately. Although dynamic languages do feel more straight to the point, static languages are easier to navigate through. All that typing information can be harnessed by intellisense and empower the non-linear reading needed to understand a program. That’s valuable for the whole life cycle of the software, not just the time to reach initial release. It’s kind of a rigid vs fluid dichotomy.
There are also in-between approaches. Python, for example, has optional typing (for type checks with mypy), whereas Lisp/SBCL allows type hints for performance. Racket and Clojure allow adding types as pre-conditions (Typed Racket and Clojure Spec).
And many modern languages like Scala or Rust mostly need types only in the function signature - the rest of the time, types are usually inferred. Even languages which were rigorously typed in the past, like C++, have added the `auto` keyword, which activates type inference.
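A small sketch of that pattern in Go (mentioned earlier in the thread as also having type inference): explicit types appear only in the function signature, while locals are inferred with `:=` (the function itself is made up):

```go
package main

import (
	"fmt"
	"strings"
)

// Types appear explicitly only in the signature...
func shout(words []string) string {
	joined := strings.Join(words, " ") // ...while joined is inferred as string
	n := len(joined)                   // and n is inferred as int.
	return fmt.Sprintf("%s! (%d chars)", strings.ToUpper(joined), n)
}

func main() {
	msg := shout([]string{"static", "types"}) // msg is inferred as string
	fmt.Println(msg)                          // STATIC TYPES! (12 chars)
}
```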
Oh I love it when the language has advanced type inference! I have fond memories of tinkering with Haxe.
Hindley-Milner type inference for the win!
It’s hard to implement, but the result is a statically typed language, mostly without type annotations.
Rust and OCaml use this.
Will read, looks interesting and I already agree with the premise but,
please people, add metadata (date, author, institution) and a bit of formatting to your pages. Not much, 10 lines of global CSS would help already.
Or post to Gemini instead