Summary by Dan Luu on the question of whether, for statically typed languages, objective advantages (like having measurably fewer bugs, or solving problems in measurably less time) can be shown.

If I think about this, the authors of statically typed languages might, at the beginning, not even have claimed such advantages. Originally, the objective advantage was that compilers for languages like C or Pascal could run at all on computers like the PDP-11, which initially had only 4 K of memory and a 16-bit address space, and even later, C programs were much faster than Lisp programs of that era. At that time, it was also considered an attribute of the programming language itself whether code was compiled to machine instructions or interpreted.

Today, with JIT compilation as in Java, and the best Common Lisp implementations such as SBCL coming within a stone’s throw of the performance of Java programs, this distinction is no longer very relevant.

Further, opinions might have been biased by comparisons of C to memory-safe languages; in other words, where actual productivity gains were perceived, their causes might have been confused.

What seems to be more or less firm ground is that the fewer lines of code you need to write to cover a requirement, the fewer bugs the result will have. So more concise/expressive languages do have an advantage.

There are people who have looked at all the program samples in the benchmark game linked above and have compared run-time performance and size of the source code. This leads to interesting and sometimes really unintuitive insights - there are in fact large differences in code size for the same task between programming languages, and a couple of languages like Scala, JavaScript, Racket (PLT Scheme) and Lua come out quite well in the ratio of size to performance.

But given all this, how can one assess productivity, or the time to get from definition of a task to a working program, at all?

And the same kinds of questions arise for testing. Most people would nowadays agree that automated tests are worth the effort: that they improve quality, shorten the time to get something working, and lead to fewer bugs. (A modern version of the Joel Test might include automated testing, but, spoiler: >!Joel’s list does not contain it.!<)

Testing in small units also interacts positively with a “pure”, side-effect-free, or ‘functional’ programming style… with the caveat, perhaps, that this style tends to push a program’s complex I/O functions to its periphery.
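To make that concrete, here is a minimal sketch (my own illustration, using a hypothetical word-count task) of that “pure core, I/O at the edges” separation in C++: the pure function is easy to test in small units, while all the file and console I/O sits in a thin shell around it.

```cpp
#include <fstream>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

// Pure core: no side effects, trivially testable in small units.
std::map<std::string, int> wordCounts(const std::string& text) {
    std::map<std::string, int> counts;
    std::istringstream in(text);
    std::string word;
    while (in >> word) {
        ++counts[word];
    }
    return counts;
}

// Imperative shell: all the I/O stays at the periphery of the program.
int main(int argc, char** argv) {
    if (argc < 2) {
        std::cerr << "usage: wordcount FILE\n";
        return 1;
    }
    std::ifstream file(argv[1]);
    std::ostringstream buffer;
    buffer << file.rdbuf();
    for (const auto& [word, n] : wordCounts(buffer.str())) {
        std::cout << word << ' ' << n << '\n';
    }
    return 0;
}
```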

It feels more solid to have a complex program covered by tests, yes, but how can this be confirmed in an objective way? And if it can, for which kind of software is this valid? Are the same methodologies adequate for web programming as for industrial embedded devices or a text editor?

  • Ephera@lemmy.ml · 2 days ago

    It feels more solid to have a complex program covered by tests, yes, but how can this be confirmed in an objective way? And if it can, for which kind of software is this valid? Are the same methodologies adequate for web programming as for industrial embedded devices or a text editor?

    Worth noting here that tests should primarily serve as a (self-checking) specification, i.e. documentation for what the code is supposed to do.
    The more competent your type checking is and the better the abstractions are, the less you need to rely on tests to find bugs in the initial version of the code. You might be able to write code, fix the compiler errors and then just have working code (assuming your assumptions match reality). You don’t strictly need tests for that.

    But you do need tests to document what the intended behaviour is and conversely which behaviours are merely accidental, so that you can still change the code after your initial working version.
    In particular, tests also check the intended behaviour of all the code parts you might not have realized you’ve changed, so that you don’t need to understand the entire codebase every time you want to make a small change.
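
    A minimal sketch of what such a test-as-specification can look like (a hypothetical slugify function and plain assert; in practice a framework such as Catch2 or GoogleTest would be used):

    ```cpp
    #include <cassert>
    #include <cctype>
    #include <string>

    // Hypothetical function under test: turn a title into a URL slug.
    std::string slugify(const std::string& title) {
        std::string slug;
        for (unsigned char c : title) {
            if (std::isalnum(c)) {
                slug += static_cast<char>(std::tolower(c));
            } else if (!slug.empty() && slug.back() != '-') {
                slug += '-';
            }
        }
        if (!slug.empty() && slug.back() == '-') {
            slug.pop_back();
        }
        return slug;
    }

    // The tests read as the specification of intended behaviour;
    // anything not pinned down here is accidental and free to change.
    int main() {
        assert(slugify("Hello, World!") == "hello-world");       // punctuation becomes a single dash
        assert(slugify("  leading spaces") == "leading-spaces"); // no leading dash
        assert(slugify("") == "");                                // empty input stays empty
        return 0;
    }
    ```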

    • HaraldvonBlauzahn@feddit.org (OP) · 1 day ago

      In my experience, tests can be a useful complement to specifications, but they do not substitute for them - especially since specs can give a bigger picture and cover corner cases more succinctly.

      And there are many things that tests can check which the respective type systems can’t catch. For example, one can easily add assertions in C++ which verify that functions are not called in a thread-unsafe way, and tests can check them.
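
      A rough sketch (my own hypothetical example) of such an assertion: the object records the thread that constructed it and asserts on every call that the caller is still that thread; a test can then exercise this contract, e.g. as a death test.

      ```cpp
      #include <cassert>
      #include <thread>

      // Hypothetical cache documented as single-threaded: every call
      // must come from the thread that constructed the object.
      class Cache {
      public:
          Cache() : owner_(std::this_thread::get_id()) {}

          void put(int key, int value) {
              assert(std::this_thread::get_id() == owner_ &&
                     "Cache::put called from a foreign thread");
              (void)key; (void)value;  // actual insertion elided
          }

      private:
          std::thread::id owner_;
      };

      int main() {
          Cache cache;
          cache.put(1, 42);  // fine: same thread that constructed the cache

          // A test of the threading contract would call put() from another
          // thread and expect the assertion to fire (e.g. as a GoogleTest
          // death test):
          //   std::thread t([&] { cache.put(2, 7); });  // aborts in a debug build
          return 0;
      }
      ```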

      • expr@programming.dev · 4 hours ago

        With your example, there are a number of languages that can statically prevent thread safety issues entirely, so that’s not actually a good example of something a type system can’t catch.

        To be honest, there’s much more that can be statically enforced by a type system than what C++ is capable of. With a sufficiently powerful type system, it tends to become more about tradeoffs in ergonomics and type safety.

    • PolarKraken@programming.dev · 1 day ago

      That’s a useful way to look at it, as verbose / extended documentation (amounts to exhaustive usage examples, if you’ve got thorough tests).

      I don’t have a metric that’s quick to relate, but for me the…attractiveness or value in testing relates heavily to:

      • Project lifecycle - longer and slower -> more tests
      • Team size (really more like 1st derivative of team size…team “churn”?) - larger, changing faster -> more tests

      Both of these are influenced by your description of tests as docs. Onboarding new engineers is way, way easier with thorough tests, for the reasons you’ve mentioned. Plus it reduces that “gun shy” factor about making changes in a new codebase.

      But it’s not always better. I’ve been writing fewer tests (few, honestly) the last year or so, sadly.