Summary by Dan Luu on the question of whether objective advantages (such as measurably fewer bugs, or solving problems in measurably less time) can be shown for statically typed languages.

Thinking about it, the authors of statically typed languages might not even have claimed such advantages in the beginning. Originally, the objective advantage was that compilers for languages like C or Pascal could run at all on computers like the PDP-11, which initially had only 4 K of memory and a 16-bit address space, and even later C programs were much faster than Lisp programs of that time. Back then, whether code was compiled to machine instructions or interpreted was also considered an attribute of the programming language itself.

Today, with JIT compilation as in Java, and with the best Common Lisp implementations like SBCL within a stone’s throw of the performance of Java programs, this distinction is not that relevant any more.

Further, opinions might have been biased by comparing C to memory-safe languages; in other words, where actual productivity gains were perceived, their causes might have been confused.

What seems to be more or less firm ground is that the fewer lines of code you need to write to cover a requirement, the fewer bugs the code will have. So more concise/expressive languages do have an advantage.

There are people who have looked at all the program samples in the above-linked benchmark game and compared run-time performance and source code size. This leads to interesting and sometimes really unintuitive insights - there are in fact large differences in code size for the same task between programming languages, and a couple of languages like Scala, JavaScript, Racket (PLT Scheme), and Lua come out quite well in the ratio of size to performance.

But given all this, how can one assess productivity, or the time to get from the definition of a task to a working program, at all?

And the same kind of questions arise for testing. Most people would agree nowadays that automated tests are worth their effort, that they improve quality / shorten the time to get something working / lead to fewer bugs. (A modern version of the Joel Test might include automated testing, but, spoiler: >!Joel’s list does not contain it.!<)

Testing in small units also interacts positively with a “pure”, side-effect-free, or ‘functional’ programming style… with the caveat perhaps that this style might push complex I/O functions of a program to its periphery.
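
To make this concrete, here is a minimal sketch (in Rust, with hypothetical names): a pure function depends only on its input, so the unit test needs no setup and no mocked I/O.

    // A pure function: the result depends only on the argument, no I/O, no global state.
    fn word_count(text: &str) -> usize {
        text.split_whitespace().count()
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn counts_words() {
            assert_eq!(word_count("automated tests are worth it"), 5);
            assert_eq!(word_count(""), 0);
        }
    }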

It feels more solid to have a complex program covered by tests, yes, but how can this be confirmed in an objective way? And if it can, for which kind of software is this valid? Are the same methodologies adequate for web programming as for industrial embedded devices or a text editor?

  • HaraldvonBlauzahn@feddit.orgOP · 2 days ago

    So, if type checking at compile time is universally good, why are there (to my knowledge) no modern and widely used languages, perhaps with the exception of Pascal and Ada, where all arrays or vectors have a size that is part of their type?

    • TehPers@beehaw.org · 1 day ago

      C, C++, and Rust come to mind as other languages with sizes as part of an array’s type. This is necessary for the compiler to know how much stack memory to reserve for the values, and other languages that only rely on dynamically sized arrays and lists allocate those on the heap (with a few exceptions, like C#'s mostly unknown stackalloc keyword).
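
      For illustration, a minimal Rust sketch (nothing beyond the standard library is assumed): the length is part of the array’s type, so the compiler knows how much stack space to reserve, while a Vec only knows its length at runtime and keeps its elements on the heap.

      fn main() {
          // [i32; 4] carries its length in the type; these 16 bytes can live on the stack.
          let on_stack: [i32; 4] = [1, 2, 3, 4];

          // A Vec<i32> has a runtime length; its elements are allocated on the heap.
          let on_heap: Vec<i32> = vec![1, 2, 3, 4];

          println!("{} {}", on_stack.len(), on_heap.len());
      }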

    • Maybe it depends on your definition of “part of”, but

      a := make([]int, 5) // a slice of int with length 5
      len(a) == 5         // true
      len("hello") == 5   // true
      

      Arrays in Go have associated sizes.

      But that’s beside the point; what does array size metadata have to do with strong typing?

      • HaraldvonBlauzahn@feddit.orgOP · edited · 1 day ago

        I did not mean slices or array size checks at runtime.

        The array size can be part of the static type, that is, one that can be checked at compile time. Rust, Pascal, and Ada have such arrays.
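
        For example, in Rust (a minimal sketch) the length is part of the static type, so passing an array of the wrong size is rejected by the compiler:

        fn sum3(xs: [i32; 3]) -> i32 {
            xs.iter().sum()
        }

        fn main() {
            println!("{}", sum3([1, 2, 3]));
            // sum3([1, 2, 3, 4]); // compile error: an array of 4 elements is not `[i32; 3]`
        }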

        But if static typing is always better, why are these types rarely used?

        • There’s a false equivalency here. Array sizes have nothing to do with static typing. For evidence, look at your own words: if undisputed strongly typed languages don’t support X, then X probably doesn’t have anything to do with strong typing. You’re conflating constraints or contracts or specific type features with type theory.

          On the topic of array sizes, are you suggesting that size isn’t part of the array type in Go? Or that the compiler can’t perform some size constraint checks at compile time? Are you suggesting that Rust can perform compile time array bounds checking for all code that uses arrays?

          • TehPers@beehaw.org · 1 day ago

            Are you suggesting that Rust can perform compile time array bounds checking for all code that uses arrays?

            I’ll answer this question: no.

            But it does optimize away unnecessary bounds checks written in code where it can, iterators being one example.

            But yes, it does runtime bounds checking where necessary.
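
            For instance (a sketch; whether a given check is actually elided depends on the optimizer), an indexed loop is written with bounds checks, while the iterator form never indexes out of range in the first place:

            fn sum_indexed(xs: &[i32]) -> i32 {
                let mut total = 0;
                for i in 0..xs.len() {
                    total += xs[i]; // bounds check here, usually removed by the optimizer
                }
                total
            }

            fn sum_iter(xs: &[i32]) -> i32 {
                // The iterator never goes out of range, so there is no check to elide.
                xs.iter().sum()
            }

            fn main() {
                let xs = [1, 2, 3, 4];
                assert_eq!(sum_indexed(&xs), sum_iter(&xs));
            }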

            • HaraldvonBlauzahn@feddit.orgOP · 1 day ago

              Actually, Rust checks arrays, which have a static size, at compile time, and slices and vectors, which have a dynamic size, at run time.
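
              A minimal sketch of the compile-time case (it only applies when the index itself is a compile-time constant; a runtime index is still checked at run time, as discussed below):

              fn main() {
                  let a = [1, 2, 3];
                  let x = a[5]; // rejected at compile time: index out of bounds, the length is 3
                  println!("{x}");
              }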

              • TehPers@beehaw.org · edited · 15 hours ago

                Rust does not check arrays at compile time if it cannot know the index at compile time, for example in this code:

                fn get_item(arr: [i32; 10]) -> i32 {
                    // get_from_user() stands in for any index only known at runtime
                    let idx = get_from_user();
                    arr[idx] // runtime bounds check
                }
                

                When it can know the index at compile time, it omits the bounds check, and iterators are an example of that. But Rust cannot always omit a bounds check; doing so in those cases could lead to a buffer overflow/underflow, which violates Rust’s rules for safe code.

                Edit: I should also add that the compiler makes optimizations around slices and vectors at compile time if it statically knows their sizes. Blanket statements here about how it optimizes will almost always be incorrect - it’s smarter than you think, but not as smart as you think, at the same time.