Summary by Dan Luu on the question of whether objective advantages of statically typed languages (such as measurably fewer bugs, or solving problems in measurably less time) can be demonstrated.

If I think about this, the authors of statically typed languages might not, at the beginning, even have claimed such advantages. Originally, the objective advantage was that, on computers like the PDP-11 - which initially had only 4 K of memory and a 16-bit address space - compilers for languages like C or Pascal could run at all; and even later, C programs were much faster than the Lisp programs of that time. At that time, it was also considered an attribute of a programming language whether its code was compiled to machine instructions or interpreted.

Today, with JIT compilation as in Java, and with the best Common Lisp implementations like SBCL within a stone's throw of the performance of Java programs, this distinction is not very relevant any more.

Further, opinions might have been biased by comparing C to memory-safe languages; in other words, where productivity gains were actually perceived, their causes might have been confused.

What seems to be more or less firm ground is that the fewer lines of code you need to write to cover a requirement, the fewer bugs the code will have. So more concise/expressive languages do have an advantage.

There are people who have looked at all the program samples in the above-linked benchmark game and compared run-time performance and source-code size. This leads to interesting and sometimes really unintuitive insights: there are in fact large differences in code size for the same task between programming languages, and a number of languages like Scala, JavaScript, Racket (PLT Scheme), and Lua come out quite well in the ratio of size to performance.

But given all this, how can one assess productivity, or the time to get from definition of a task to a working program, at all?

And the same kind of questions arise for testing. Most people would agree nowadays that automated tests are worth the effort: that they improve quality, shorten the time to get something working, and lead to fewer bugs. (A modern version of the Joel Test might include automated testing - but, spoiler: >!Joel’s list does not contain it.!<)

Testing in small units also interacts positively with a “pure”, side-effect-free, or ‘functional’ programming style… with the caveat perhaps that this style might push complex I/O functions of a program to its periphery.
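To make that concrete, here is a minimal sketch (the function and numbers are invented for illustration): a side-effect-free function can be tested with plain assertions, with no mocks or fixtures needed.

```python
def apply_discount(prices: list[float], rate: float) -> list[float]:
    """Pure function: the output depends only on the inputs,
    with no I/O and no shared state."""
    return [round(p * (1 - rate), 2) for p in prices]

# Because it is pure, testing needs no setup at all:
assert apply_discount([10.0, 20.0], 0.1) == [9.0, 18.0]
assert apply_discount([], 0.5) == []
```

The I/O-heavy parts (reading the price list, printing an invoice) would then sit at the program's periphery, as described above, and need heavier test scaffolding.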

It feels more solid to have a complex program covered by tests, yes, but how can this be confirmed in an objective way? And if it can, for which kind of software is this valid? Are the same methodologies adequate for web programming as for industrial embedded devices or a text editor?

  • TehPers@beehaw.org · 2 days ago

    There’s no scientific evidence that pissing in someone’s coffee is a bad idea, but it’s common sense not to do that.

    You seem to be looking to apply the scientific method somewhere that it can’t be applied. You can’t scientifically prove that something is “better” than another thing because that’s not a measurable metric. You can scientifically prove that one code snippet has fewer bugs than another though, and there’s already mountains of evidence of static typing making code significantly less buggy on average.

    If you want to use dynamic typing without any type hints or whatever, go for it. Just don’t ask me to contribute to unreadable gibberish - I do enough of that at work already dealing with broken Python codebases that don’t use type hints.

    • Life is Tetris@leminal.space · 1 day ago

      there’s already mountains of evidence of static typing making code significantly less buggy on average

      What is this mountain of evidence? The only evidence I remember about bugginess of code across languages is that bug count correlates closely to lines of code no matter the language.

      • TehPers@beehaw.org · 1 day ago

It’s not hard to find articles online explaining the benefits of using TypeScript over JavaScript, or of type hints in Python over no type hints. It’s so well known at this point that some Python libraries now require type hints (Pydantic, FastAPI, etc) or require TypeScript (Angular, etc), and people now expect types in their libraries. Even the FastAPI docs explain the benefits of type hints, and they use annotated types as well for things like dependencies.
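As a small, hypothetical illustration of what those hints buy (the function and data here are invented): a checker such as mypy can reject an ill-typed call before the code ever runs, while the unannotated version would only fail at runtime.

```python
from typing import Optional

def find_user_id(name: str, registry: dict[str, int]) -> Optional[int]:
    """Return the user's id, or None if the name is unknown."""
    return registry.get(name)

registry = {"alice": 1, "bob": 2}

# A checker like mypy flags a bad call statically, e.g.:
#   find_user_id(42, registry)  # error: incompatible type "int"
uid = find_user_id("alice", registry)
if uid is not None:  # the Optional return type forces a None check
    print(uid)
```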

But for more fully written-out articles, Cloudflare’s discussion of writing their new proxy in Rust (which has one of the strictest type systems among commonly used programming languages) and Discord’s article on switching from Go to Rust come to mind. To quote Cloudflare:

        In fact, Pingora crashes are so rare we usually find unrelated issues when we do encounter one. Recently we discovered a kernel bug soon after our service started crashing. We’ve also discovered hardware issues on a few machines, in the past ruling out rare memory bugs caused by our software even after significant debugging was nearly impossible.

    • HaraldvonBlauzahn@feddit.orgOP · 1 day ago

      You can scientifically prove that one code snippet has fewer bugs than another though, and there’s already mountains of evidence of static typing making code significantly less buggy on average.

      Do you mean memory safety here? Because yes, for memory safety, this is proven. E.g. there are reports from Google that wide usage of memory-safe languages for new code reduces the number of bugs.

      You can’t scientifically prove that something is “better” than another thing because that’s not a measurable metric.

Then, first: why don’t the claims about statically typed languages come with measurable, objective benefits? If these languages are really significantly better, it should be easy to come up with such measures.

And the second thing: we have at least one large-scale experiment, because Google introduced Go and used it widely within the company to replace Python.

      Now, it is clear that programs in Go run with higher performance than Python, no question.

But did this lead to productivity increases or better code because Go is a strongly, statically typed language? I have seen no such report - in spite of the fact that they now have 16 years of experience with it.

(And just for fun: Python itself is memory safe, and concurrency bugs in Python code can’t lead to undefined behaviour as in C. Go is neither memory safe nor does it have that level of concurrency safety: if you concurrently modify a hash table in two different threads, this will cause a crash.)
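A quick sketch of the claimed difference in Python (a demonstration, not a benchmark): many threads mutate one shared dict, and under CPython the dict stays internally consistent instead of crashing the runtime, unlike an unsynchronized Go map.

```python
import threading

shared: dict[int, int] = {}

def writer(base: int) -> None:
    # Each thread writes 1000 distinct keys into the same shared dict.
    for i in range(1000):
        shared[base * 1000 + i] = i

threads = [threading.Thread(target=writer, args=(t,)) for t in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every insert landed and the dict structure is intact - no crash,
# no undefined behaviour.
assert len(shared) == 8000
```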

      • TehPers@beehaw.org · 23 hours ago

        Do you mean memory safety here? Because yes, for memory safety, this is proven. E.g. there are reports from Google that wide usage of memory-safe languages for new code reduces the number of bugs.

        Memory safety is such a broad term that I don’t even know where to begin with this. Memory safety is entirely orthogonal to typing though. But since you brought it up, Rust’s memory safety is only possible due to its type system encoding lifetimes into types. Other languages often use GCs and runtime checking of pointers to enforce it.

        Then, first, why don’t the claims that statically compiled languages come with claims on measurable, objective benefits? If they are really significantly better it should be easy to come up with such measures?

        Because nobody’s out there trying to prove one language is better than another. That would be pointless when the goal is to write functional software and deliver it to users.

        I have seen no such report - in spite of that they now have 16 years of experience with it.

        I have seen no report that states the opposite. Google switched to Go (and now partially to Rust). If they stuck with it, then that’s your report. They don’t really have a reason to go out and post their 16 year update on using Go because that’s not their business.

        And just for fun, Python itself is memory safe and concurrency bugs in Pyhton code can’t lead to undefined behaviour, like in C.

        Python does have implementation-defined behavior though, and it comes up sometimes as “well technically it’s undocumented but CPython does this”.

Also, comparing concurrency bugs in Python to those in C is wildly misleading - Python’s GIL prevents two code snippets from executing in parallel, while C needs to coordinate shared access with the CPU, sometimes even reordering instructions if needed. These are two completely different tasks. Despite being a low-level language, Rust is also “memory safe”, and to an extent beyond Python: it also prevents data races, unlike Python (which still has multithreading despite running only one thread at a time).

        Go is neither memory safe…

        ?

        …nor has it that level of concurrency safety

        That’s, uh, Go’s selling point. It’s the whole reason people use it. It has built-in primitives for concurrent programming and a whole green threading model built around it.

        If you concurrently modify a hash table in two different threads, this will cause a crash.

        This is true in so many more languages than just Go. It’s not the case in Python though because you can’t concurrently modify a hash table there. The crash is a feature, not a bug. It’s the runtime telling you that you dun goof’d and need to use a different data structure for the job to avoid a confusing data race.

        • HaraldvonBlauzahn@feddit.orgOP · 17 hours ago

          Memory safety is entirely orthogonal to typing though.

          Well, is it possible that perhaps the benefits of Rust’s memory safety are confused to be benefits of static typing?

          • TehPers@beehaw.org · 14 hours ago

            Rust’s memory safety guarantees only work for Rust due to its type system, but another language could also make the same guarantees with a higher runtime cost. For example, a theoretical Python without a GIL (so 3.13ish) that also treated all mutable non-thread-local values as reentrant locks and required you to lock on them before read or write would be able to make the same kinds of guarantees. Similarly, a Python that disallowed coroutines and threading and only supported multiprocessing could offer similar guarantees.
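A rough sketch of that first idea (all names are made up, and the locking discipline here is enforced only by convention, not by the language runtime as in the hypothetical Python described):

```python
import threading
from typing import Generic, TypeVar

T = TypeVar("T")

class Locked(Generic[T]):
    """A mutable value that must be locked (reentrantly) before any
    read or write - a convention-based stand-in for the language-level
    rule sketched above."""

    def __init__(self, value: T) -> None:
        self._value = value
        self._lock = threading.RLock()  # reentrant, as described

    def __enter__(self) -> T:
        self._lock.acquire()
        return self._value

    def __exit__(self, *exc) -> bool:
        self._lock.release()
        return False

counter = Locked([0])
with counter as c:  # all access goes through the lock
    c[0] += 1
```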

        • HaraldvonBlauzahn@feddit.orgOP · 1 day ago

As I already said: if you read and write the same hash map from two different threads, this can cause a crash. And generally, if you concurrently access objects in Go, you need to use proper locking (or communication via channels), otherwise you will get race conditions, which can result in crashes. Ctrl-F “concurrency” here. This is different from, for example, Java or Python, where fundamental objects always stay in a consistent state, by guarantee of the runtime.

          • unique_hemp@discuss.tchncs.de · 1 day ago

That is a consequence of having parallelism - all mainstream pre-Rust memory-safe languages with parallelism suffer from this issue, and they are still generally regarded as memory safe. I don’t know where you got the idea that Java does not have this issue; you need to know to use the parallelism-safe data types where necessary.

            • HaraldvonBlauzahn@feddit.orgOP · 18 hours ago

In Java, this wouldn’t cause a crash or incorrect behaviour of the runtime - Java guarantees that. One still needs locking to keep grouped changes in sync and the ordering of multiple operations consistent, but not in the way required in Go, C, or C++.

Also, it is certainly possible to implement the “shared access xor mutating access” principle that Rust uses in a dynamically typed language. Most likely this won’t come with Rust’s performance guarantees - but, hey, Python is 50 times slower than C and it’s widely used.
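For what it’s worth, here is a toy runtime version of that rule (all names invented; single-threaded checks only, so it carries none of Rust’s compile-time guarantees): any number of shared borrows, or exactly one mutable borrow, never both.

```python
class BorrowError(RuntimeError):
    pass

class BorrowCell:
    """Runtime 'shared xor mutable' checking; a real version would
    also guard these counters with a lock for thread safety."""

    def __init__(self, value):
        self._value = value
        self._readers = 0
        self._writing = False

    def borrow(self):
        if self._writing:
            raise BorrowError("value is mutably borrowed")
        self._readers += 1
        return self._value

    def release(self):
        self._readers -= 1

    def borrow_mut(self):
        if self._writing or self._readers:
            raise BorrowError("value is already borrowed")
        self._writing = True
        return self._value

    def release_mut(self):
        self._writing = False

cell = BorrowCell([1, 2])
r = cell.borrow()      # shared borrow: allowed
cell.release()
w = cell.borrow_mut()  # exclusive borrow: allowed once readers are gone
w.append(3)
cell.release_mut()
```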

    • HaraldvonBlauzahn@feddit.orgOP · 18 hours ago

      If you want to use dynamic typing without any type hints or whatever, go for it.

Oh, I didn’t say that I want that, or recommend using only dynamically typed languages. For my part, I use a mix, and a big part of the reason is that, depending on the circumstances, it can be faster to develop something with dynamic languages.

In particular, the statement “there is no scientific, objective proof that using statically typed languages is generally better by some measure (except speed)” does not mean that dynamically typed languages are generally or objectively better.