Hacker News Comments on
6th HLF – Lecture: Leslie Lamport

Heidelberg Laureate Forum · YouTube · 253 HN points · 0 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Heidelberg Laureate Forum's video "6th HLF – Lecture: Leslie Lamport".
YouTube Summary
Leslie Lamport: "If You're Not Writing a Program, Don't Use a Programming Language"

Algorithms are not programs. They can and should be written with math rather than programming languages or pseudo-languages. This applies to many more algorithms than the ones taught in algorithm courses.

This video is also available on another stream:
http://hitsmediaweb.h-its.org/Mediasite/Play/abdee6c8696a48288ce8c3e004c50cd41d?playFrom=13826&autoStart=false&popout=true


The opinions expressed in this video do not necessarily reflect the views of the Heidelberg Laureate Forum Foundation or any other person or associated institution involved in the making and distribution of the video.


More information about the Heidelberg Laureate Forum:

Website: http://www.heidelberg-laureate-forum.org/
Facebook: https://www.facebook.com/HeidelbergLaureateForum
Twitter: https://twitter.com/hlforum
Flickr: https://www.flickr.com/hlforum
More videos from the HLF: https://www.youtube.com/user/LaureateForum
Blog: https://scilogs.spektrum.de/hlf/

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Sep 24, 2018 · 253 points, 274 comments · submitted by azhenley
dmreedy
I believe these are the kinds of ideas that might actually create a genuine engineering culture in software development. Until we start applying this kind of rigor to our work, I don't believe the title of "Software Engineer" is justified. It doesn't have to be TLA+; it doesn't have to be any particular tool or technology or pattern or whatever. But the attitude that rigor and formal technique is worth the additional effort is something that I don't find often in the current culture, or at least the narrow slice of it that I've experienced.

The quote at the end sums it up nicely. Coding should be the easy part. A sentiment everyone tends to nod their head along with, but rarely has the discipline and willpower to implement.

Nursie
>the attitude that rigor and formal technique is worth the additional effort

Commercially speaking - it's usually not.

pjmlp
It will be, we just need a couple more lawsuits about faulty software and security exploits.
Nursie
How long has this sort of stuff been said though?
pjmlp
Not long enough, but IoT mass deployments are going to change that, as they are much worse than Windows 9X security issues.
Nursie
I'm not sure that we'll end up with more formal proofs as a result of this. But better locked down systems would be a good thing in this space.
tormeh
Totes agree. Also, customers are humans. They don't want to be handed a perfect product. They want to be involved in the process. Make dumb decisions. Complain. Receive bugfixes. Repeat.

I don't know. Maybe I'm jaded, but I think in B2B, good software is not even desired. In B2C, sure, as Apple proves, but not in B2B.

Nursie
> good software is not even desired.

You can make good software without formal proofs. It just may not be proven to be perfect.

> They want to be involved in the process. Make dumb decisions.

This I agree with though, sure. They do make dumb decisions, and they push ahead anyway, even when told. Later they change their minds, we change the software. It's the grand circle of life or something.

im_down_w_otp
Does this speak more to the pursuit of the application of software to problems generally or to the baseline triviality of problems that are funded to have software applied to them?

To me it seems like an incentive problem that starts with the distorting effects of unsophisticated investors w/ unrealistic expectations who, following a herd mentality, fund only things they can themselves pattern-match to.

Nursie
To me it speaks of Software being "good enough" far more easily than it can be perfect.

> To me it seems like an incentive problem

Why is it a problem if a buyer gets something that works well enough for a lot less time/money put in?

Further to that - specifications and requirements are often imperfect, poorly pinned down, ever-changing. Software creation out in the commercial world has shifted working patterns to try to adapt to that (Agile methods amongst others). Attempting formal proof of such soft, malleable systems would appear to be a fool's errand.

im_down_w_otp
I'm not sure I follow.

The challenge with this formulation of "ever-changing" is that it assumes that things like Agile methods are somehow creating the capacity to iterate toward something of value with intention and ambition in such a way that you'll achieve whatever it is that you're trying to achieve.

However, what happens is that the lack of rigor in software development today seems to cause the lineage of development and associated created technology to drift further and further away from the desired commercial goal as those goals change or mutate.

Creating a bunch of brittle technology and technical debt to poorly approximate Purpose-A (a purpose by the way that you might be able to formally prove trivially is actually impossible and you either shouldn't waste the money or should intentionally select the best sub-optimal solution) doesn't help you get to Purpose-B any faster. As you become aware that Purpose-B is a thing you need to angle toward, you're still dealing with the pile of bubblegum, toothpicks, and instruction manuals written in alien script that still hasn't achieved Purpose-A.

It's a really weird thing to me that as an entire ecosystem and community we've decided that the most valuable way to approach software is to declare that it's impossible to ever "finish" it.

If I have a series of high-assurance layers to build upon that I know I can trust, then iterating and adapting to changes actually becomes faster and easier, because I know the surface area of where a problem can actually reside and how deep in any part of a system I actually have to uproot and start over. In the current formulation of the world, basically everything is a suspect, and basically any solution is considered viable... whether it truly is or not.

I've been in situations where Product Managers and Engineers spent months off and on arguing over some proposed solution or feature and never getting anywhere because everybody's basis was built on social capital, political capital, and mythology. What finally killed the conversation and allowed progress to actually happen? When a TLA+ model that invalidated one of the proposed hypotheses was presented. "Well, look. The way you want to do this is actually impossible. It will need to be done like this, this, or this instead." Seven... _seven_... sprints of about a dozen people wasted because nobody could formalize either the requirement or the approach. Until somebody did. This wasn't for a safety-critical system either.

Nursie
> The challenge with this formulation of "ever-changing" is that it assumes that things like Agile methods are somehow creating the capacity to iterate toward something of value

If you're doing it reasonably well, they produce small chunks of something valuable every iteration.

> Creating a bunch of brittle technology and technical debt

Who's creating brittle software or technical debt? Things can be well designed and well written, well structured and well tested without going as far as formal proof, which in most commercial situations is going to be masses of overhead and quickly not applicable to the ever-shifting problem domain.

> It's a really weird thing to me that as an entire ecosystem and community we've decided that the most valuable way to approach software is to declare that it's impossible to ever "finish" it.

You seriously need to get out into the commercial world more. Or perhaps into the smaller, faster moving commercial world (I'm not sure where you are now).

> If I have a series of high-assurance layers to build upon that I know I can trust,

Then the underlying assumptions are likely to change, rendering your layers an expensive boondoggle.

> I've been in situations where Product Managers and Engineers spent months off and on arguing over some proposed solution

Then you have bigger problems than formal proof.

I'm generally looking at things which are designed and delivered within a number of months - anything that takes months to argue over has entirely missed the boat as far as I'm concerned.

__jal
It speaks to economics.

A business process that works "by itself" most of the time, but requires human intervention once in a while, is still a functional business process. The unit of "software + 5% of this person's time" is perfectly workable. And if the cost of the fix is more than the NPV of 5% of that person's time, it isn't worth making.

Catastrophic failure risk, of course, changes the analysis. But a huge range of things are not catastrophic in their failure modes.

thwy12321
Coding is completely different from engineering, it will never be engineering, and it will never be the easy part. Engineering is a solved problem because we keep building the same buildings and the same bridges over and over. It's like deploying the same API over and over but only changing a few lines in a very rigorous way. If anything, engineering is not engineering and coding is engineering. Coding is actually creating. There is no strong, hardened, foolproof way to do it, there's simply too much complexity, too many possibilities for what could be.
batteryhorse
I disagree that engineering is the best approach to writing software. I think this is the fundamental difference between people who prefer static vs dynamic typing. People who prefer static typing are more likely to say that one solution is more "correct" than the other, whereas dynamic typing allows more flexibility.

I see writing software to be similar to writing music. Music has rules and structure, but leaves room for creativity and allows two or more highly skilled musicians to be able to play music together and improvise.

Static typing imposes limitations (kind of like government), whereas dynamic typing gives you more freedom (but requires responsibility)

eponeponepon
You're conflating the act of building a product with the act of writing code. Even in the most expressive of languages, there are objectively good and bad ways of instructing your hardware to perform a particular calculation or set of calculations - and in the more rigid languages there is often a flat-out right and wrong way.

Design your product with as much free thought and creativity as you please, but never let yourself believe your actual code is art. That - among many other outcomes - is how your platform gets compromised and your customers' identities stolen - not to mention how the successors to your job end up cursing your name and wasting time and money replacing your code wholesale.

maltalex
> I see writing software to be similar to writing music. Music has rules and structure, but leaves room for creativity and allows two or more highly skilled musicians to be able to play music together and improvise.

Bad music carries only the risk of people not liking it. Bad code on the other hand...

wolco
Usually risks people not using it or playing your game.

Rarely does it kill, and in those cases (self-driving cars, banking) those applications usually have a company behind them.

munk-a
If the RSS reader you're building has no guarantees of working it has no value to anyone. Not all software needs to try and be bug free but all software should strive to minimize the number of bugs in it.

Static typing helps to minimize bugs (when used correctly) by allowing function contract declarations and reducing the build-up of defensive guards over time. I've seen code in dynamically typed languages with lines like `if (!foo is integer) raise;` which basically equates to onerous home-built static typing.
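
As a rough illustration of that trade-off (a minimal sketch with made-up names, not code from any real project), the same contract can live in a runtime guard or in the signature, where a checker such as mypy can enforce it before anything runs:

  # Dynamic style: the contract is a defensive guard that only fires at runtime.
  def pay_invoice_dynamic(amount):
      if not isinstance(amount, int):
          raise TypeError("amount must be an integer number of cents")
      return amount * 105 // 100  # hypothetical 5% surcharge

  # Static style: the contract is the signature; a checker like mypy flags
  # pay_invoice_typed("ten cents") without ever executing the function.
  def pay_invoice_typed(amount: int) -> int:
      return amount * 105 // 100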

wolco
If it has no guarantee and it works it still has value for whoever uses it. If it didn't work it might have less value but still has some value.

Why would guarantee-free software have no value? It's like buying a used phone... no guarantee, but it still has value.

darkerside
Parent isn't saying his way is best. He's saying it's the appropriate mindset for some types of software. What you describe might make sense for bespoke web apps (and many other applications), but it may not be appropriate for, say, firmware development, or writing a web server from scratch.
MaxBarraclough
That might work fine if you're throwing together a blogging system, but not in a critical-systems software-engineering context. When the stakes are high, it doesn't do to view your code as a creative outlet.

Critical-systems software engineering is not about your creativity, it's about building maximally correct software, to minimise the probability of a catastrophic outcome.

This idea that it's inherently virtuous to let software developers be creative, strikes me as quite wrong-headed. If a strict methodology produces better results (and it does), a strict methodology should be adopted.

Bridge engineers don't whine about their constrained creativity. They're too busy ensuring their creations don't get anyone killed.

With the curious exception of Erlang et al, few people serious about program correctness favour dynamically typed languages. There's a reason why Ada is statically typed, and strongly typed (it lacks the dangerous implicit type conversions permitted by C).

An excellent article on this kind of thing: https://www.fastcompany.com/28121/they-write-right-stuff

vmchale
> Static typing imposes limitations (kind of like government),

where to begin

MrEfficiency
I program as you say 'similar to writing music'.

This is because given all of the levels of various languages, frameworks, and libraries- I'm just trying to get things to work.

However, I would very much like the methodical engineering approach at my day job applied to programming.

It really blows my mind we don't have full stack + auth that works cross-platform out-of-the-box. It blows my mind my recent framework didn't show the POST request exchange to get tokens and authenticate.

There are NOT that many people that program at this level. It seems most of my programming questions are answered by asking experts rather than by reading a Wikipedia article. Electrical and mechanical engineering questions are better solved by reading than by asking engineers, IMO.

emodendroket
But it's clear that one approach can be more correct than another. If it was supposed to save a document and instead it deletes all my documents, it's not right.
GenericsMotors
I hope you stick to webdevelopment and never touch anything remotely related to medical devices, or control systems.
bbeonx
So you're saying Python devs are all libertarians? :D

<braindump>

I agree that coding/development is an art, but I also am hard pressed to find anything that, when done well, isn't an art.

The difference is that music doesn't have any real notion of correctness or efficiency; sure, there are ideas such as "no parallel fifths" and "tritones must resolve to M3 or m6", but these aren't issues of correctness so much as stylistic guidelines. Schoenberg's compositions weren't incorrect in any universal sense.

Programs, on the other hand, can be incorrect or inefficient. This is because most programs have a concrete function they are meant to perform.

Dynamic types are great for exploring your solution space. I can quickly whip together a prototype, try out different solutions, etc. I don't need to design my entire program ahead of time, and this is nice. However this does not prevent me from reasoning formally about my program. Once I've muddled around a bit and built up some scaffolding, it very well may behoove me to try to shore up my design, spend some time reasoning and designing a formal algorithm/specification/etc, etc.

And for the record, as a musician I definitely think formally about my compositions/pieces I'm playing. Granted, being a guitarist this usually doesn't involve me writing out my pieces with sheet music, but there is still a blend of intuition/exploration and brute computation.

As an example, I oftentimes have two disparate parts that I need to connect together, maybe by modulating, maybe by changing registers, or maybe by altering my time signature. At that point I put the guitar aside for a moment and think about my formal requirements, and then try to find some 'convincing' way to get from point A to point B. Maybe I have some constraints (I have two measures, I need to start in this position and end in that position, this finger needs to pedal this note, etc). This quickly becomes a math problem where only a small set of viable solutions exist. After 'doing the math' I am left to evaluate my solutions' aesthetic quality which, I suppose, is where most people think the 'artistic' bit comes into play. But really, I'd say artistry abounds throughout the process.

And to be clear, I'm not thinking about this stuff as doing math as I'm doing it--I have my own system for thinking about this stuff that doesn't look like math.

Anyway, all of this is to say that formalism can be used even in music with very real benefits.

</braindump>

bilg21
Isn't that why you have titles such as Software Engineer, Developer, and Programmer? A Software Engineer should have the engineering background and tackle hard problems, whereas Developers and Programmers focus on products and features.
azhenley
All of these terms are used mostly interchangeably, and it just depends on what company you are at. For example, Microsoft calls them all SDEs (Software Development Engineer) and Google calls them Software Engineers.
pjmlp
In many countries it is illegal to do so unless those people are actual engineers with an accredited degree from the engineers' college.
TeMPOraL
It's fine when we're talking about leisure-paced art, done by people who deeply care about what they do. But the reality of software development is different. Software is written primarily by people who treat it as a job, and care about it (on average) about as much as they're paid to - and they're being paid by people who don't care about the product, only about its ability to make money. Software written in such an environment is (unlike music) powering systems that affect the livelihoods, health, and safety of other people. With the market pushing quality down, we need a countering force pushing it up.

Or, imagine dropping all regulations around planes, cars, food, health care or infrastructure. Would things be handled better by artists, free to pursue quality? Maybe. Will the majority of providers be artists, or even care about quality? No.

gota
Just a sidenote, reading your comment it occurs to me that I have no idea - and as far as I can tell never read anything about - what is the proportion of code that is done 'for fun' versus 'for pay'.

If anyone has any survey or any discussion on that please reply so I can take a look at it. I'd like to confront my expectations with data

pron
I think that tools such as TLA+ show their value not by convincing people that they're worth the extra effort but that they actually save you effort. The experience at Amazon and Microsoft shows exactly that, and managers were relatively easily convinced.
im_down_w_otp
Absolutely agree. I work in the autonomous vehicle space, specifically in creating high-assurance resilient systems, and I've had more arguments than I care to count which went something like the following:

Them, "All that extra effort sounds great, but we don't have the time to do that. It will explode the really tight build, test, debug cycle we have now. Suddenly every cycle will be 100x as long."

Me, "First of all, you should be less concerned about the length of an individual cycle and more concerned about the sum of all the cycles up to completion."

Them, "Okay, but having short cycles also helps us course correct quickly, and keeps our 'velocity' up."

Me, "Well, second of all, what good is it to be able to iterate really quickly toward an unspecified goal? How do you know that what you're doing will get you there? Or worse, how do you know that what you're trying to do is even possible or not? Maybe you're 'engineering' your way asymptotically toward an impossibilty result."

It's sometimes crazy-making to have these conversations and see how wildly distorted we've allowed our incentives and perceptions to become as a group of professionals. It's become more important to produce the appearance of progress and effort than it is to actually make progress and thoughtfully apply effort.

dchichkov
The problem is, most of the people you are working with only have 0-2 years of experience in the technology that they are using (use of research / deep learning to build products). Look at their resumes. They'd be either recent PhD grads who haven't built a single product in their lives, or traditional engineers who have just joined a field of building products with an approach that is very different from the one they are experienced in.

I would also caution you. Do you really know the technology that you are trying to make safe? Building systems with research / deep learning is as different from traditional software development as synthetic biology from electrical engineering.

im_down_w_otp
That's an institutional problem though.

Yes, it's true that most companies hire way more people, way too quickly than is probably prudent for a problem domain (especially one that is still such an open-ended science project like AVs), and that as a result barely anybody has any idea what they're doing measured against what's being expected of them, but that's a wound that is entirely self-inflicted. It's not a natural law that one needs to suddenly staff up dozens or hundreds of really expensive people ahead of any real capacity to train or assimilate them, but it does happen with depressing regularity. If anything, I'd say this is probably the biggest problem I see in the ecosystem that you and I are in.

The team I work with is small. Everyone has several (5-15) years of experience consuming research, turning it into practical solutions, and in some cases even having created a novel practical application that was later supported by adjacent research.

It's not easy to construct a team of half-dozen to a dozen prolific programmers w/ a bent toward the academic, but an innate sensibility for remaining pragmatic and applicable. But, it's also not impossible to do so either. This is inline with my incentives comments elsewhere. I think the incentives are broken at too many layers to be able to be fastidious and intentional about how we solve problems. Say you get an "empire builder" into the leadership structure... you're doomed. The whole point of everything becomes about having as big of a "rockstar" team as possible, not about trying to solve a problem. Unless the problem one is trying to solve is how to quickly become the SVP of an extremely expensive and probably fairly unproductive team of increasingly disaffected previously very effective people.

dchichkov
I'm finding the fifteen-year figure hard to believe. There was only a handful of people at that time who had invented deep learning. It took years, until AlexNet, for the technology to become competitive and see any industry use.
im_down_w_otp
We don't work in AI/ML, we work on high-assurance distributed systems.

That said, you'd be right. Not 15 years in something like computer vision. No such thing as formally verified NNs exists, regardless of what nuTonomy claims. Even the basis of what one would be trying to verify is an open question. It's partially the point of DARPA's Assured Autonomy program.

The problem at the moment is that the ecosystem has two science projects on its hands. How to make an ML-derived driver is one. How to make a stable, correct substrate for acquiring data, making a decision, and reacting correctly is another. The only one most people hire up for is the former, and the latter is at least as challenging.

dchichkov
Isn't it a "high-assurance distributed AI/ML system"? It is interesting that there is indeed a tendency to factorize it. I believe, this tendency arises because there are two groups of people that are available - traditional engineers, that are excellent and experienced, but do not know the language of AI and can't tell an ensemble from an embedding. And AI engineers, who are yet to build their first product.
im_down_w_otp
Sort of. Except I don't think there's such a thing as "high-assurance AI/ML". Put simply for the audience here: the goal is to have the whole stack from top to bottom be as predictable and deterministic as possible.

There's a layer of it (the AI/ML bits) which is necessarily probabilistic. That's by design. It's how it all works. All the layers below that today are currently terrifyingly probabilistic. It's difficult to advance the capability of the whole domain when the necessarily probabilistic parts are depending on shifting sands as a stable foundation.

I agree though that there's a large gap there in the domain expertise of the parties involved, and that's produced significant challenges as we've not yet figured out how to close that gap.

IX-103
So, two questions:

1. What parts of AI/ML have to be nondeterministic?

Once the model is trained then it should be completely deterministic. There are some training techniques that randomize the order or grouping of data, but they are primarily used because random is almost as good as finding the optimal ordering/grouping, and finding the optimal ordering/grouping is hard. Sometimes randomness is used to augment the data to give you more examples to fit against (because the model is too complex to train/fit with the data you have, or you want to be robust to certain variations in input). If you're talking outputs, then you should get the same outputs from a model given the same inputs. There are some really bad models out there that are way overfitted (relevant xkcd - https://m.xkcd.com/2048/ see "house of cards") and some people working on ML have a terrible grasp of statistical techniques (just try models until they find one that fits), but even these models are deterministic.

2. What is so bad about nondeterminism anyway? Ethernet used nondeterminism to handle collision resolution. The best prime-finding algorithms are nondeterministic. There is nothing wrong with appropriate nondeterminism.
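
As a minimal sketch of that distinction (toy numbers, not a real model): inference over frozen weights is a pure function of its input, while the randomness usually sits in training-time choices such as shuffling, which can themselves be pinned with a seed.

  import random

  WEIGHTS = [0.2, -1.3, 0.7]  # hypothetical parameters, frozen after training

  def predict(features):
      # Deterministic inference: the same features always yield the same score.
      return sum(w * x for w, x in zip(WEIGHTS, features))

  def training_order(examples, seed=42):
      # Randomness is confined to training; fixing the seed makes even this reproducible.
      rng = random.Random(seed)
      shuffled = list(examples)
      rng.shuffle(shuffled)
      return shuffled

  assert predict([1.0, 2.0, 3.0]) == predict([1.0, 2.0, 3.0])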

dchichkov
Are you sure such a thing as "high-assurance AI/ML" doesn't exist? Are you an expert in AI, with many years of experience building products with AI technology?
im_down_w_otp
I'm not an expert in AI, but as you can imagine I have to interact with a lot of people who are. What I do know is what it means to be able to do end-to-end verification and it seems fairly antithetical to the basis of machine learning which has an open-ended failure domain. If you could write an AI as a total program, it would cease to be probabilistic. It would become propositional, but would also seem to cease to be writable and computationally tractable. As it would need to encode all possible states of all possible features of all relevant domains and all possible things that can interact with each of those things and what their effects would be.

Though, I'm happy to be proven wrong here as it would make me a lot less nervous about the future were there a way to legitimately assure the artificial driver in an AV won't fail in emergent ways, or at all.

dchichkov
"What foolishness! What nonsense! .... For as children tremble and fear everything in the blind darkness, so we in the light sometimes fear what is no more to be feared than the things children in the dark hold in terror and imagine will come true. This terror therefore and darkness of mind must be dispelled not by the rays of the sun and glittering shafts of day, but by the aspect and law of nature."

LUCRETIUS, De Rerum Natura

:)

ndpi
I work on the formal verification of AI systems as well, but focusing on model-based reasoning systems rather than the model-free ML systems. What you've described is exactly what terrifies me most about ML in safety critical contexts: it can fail in a large number of ways, and predicting these ahead of time is generally intractable.

I like to think of it in terms of quantifiers: for most systems it's fine to say that there exists a set of representative tests that the system passes. For critical systems, you really want "for all" in there instead.

Fortunately the model-based systems are much easier to reason about, even if they suffer from difficulty modeling certain cases. For a model-free system I think the starting point would be to precisely state a correctness criterion that could be used as a decision procedure on the output of the model; the combined system would be "verified" to a certain degree. Unfortunately you can't guarantee that it would always be able to yield an answer (the model might always give a bad one that gets rejected), and, as the more fundamental problem, if you can express the correctness property in enough detail to perform verification on it, you might be able to write an easier-to-verify implementation that doesn't rely on ML.
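
A minimal sketch of that kind of output check, with invented names and thresholds rather than any real system's API: the unverified model proposes, a simple decision procedure on its output accepts or rejects, and nothing forces the model to ever produce an acceptable answer.

  from dataclasses import dataclass

  @dataclass
  class Plan:
      speed_mps: float
      lateral_offset_m: float

  def is_acceptable(plan, speed_limit_mps):
      # The correctness criterion, stated as a decision procedure over the model's output.
      return 0.0 <= plan.speed_mps <= speed_limit_mps and abs(plan.lateral_offset_m) <= 1.5

  def guarded_plan(model, sensor_input, speed_limit_mps, fallback):
      # Act on the model's output only if the checker accepts it; otherwise use a safe default.
      candidate = model(sensor_input)
      return candidate if is_acceptable(candidate, speed_limit_mps) else fallback

  # Stand-in "model" proposing an over-speed plan; the monitor rejects it and falls back.
  plan = guarded_plan(lambda _: Plan(45.0, 0.2), None, 30.0, fallback=Plan(0.0, 0.0))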

This stuff is hard, yo.

ajmurmann
I think your frustration largely arises because the problem you work on is uncommon. The goals you are working towards are very large and error is not an option because people die or at least massive property damage occurs. Most applications aren't like that. It doesn't matter much if 1 in 10k image uploads to Instagram fails. We have a lot more Instagrams than autonomous cars, so the engineering culture conversation is largely dominated by those trade-offs. It's interesting because I think we're experiencing the pendulum having swung heavily towards Agile. In the past we used to slow down some CRUD apps by first drawing UML diagrams. Fingers crossed that we can discover a wider, more differentiated range of engineering practices.
im_down_w_otp
My particular vein of work certainly makes me more likely to have interacted with this different way of doing things, but I don't think at all that it's a necessary component to there being reasons to consider it and reap the benefits of it. Because I didn't always used to be in this vein of work, and in retrospect I would have benefited enormously from being more judicious and fastidious in my prior lives using things like TLA+ or requiring even super basic things from my co-workers/teammates like property-based tests. I would have saved so much time and money and heartache over the years.
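
For readers unfamiliar with the term, a property-based test states a "for all inputs" claim and lets a library such as hypothesis generate the cases; here is a minimal sketch around a made-up helper function:

  from hypothesis import given
  import hypothesis.strategies as st

  def dedupe_preserving_order(items):
      seen = set()
      return [x for x in items if not (x in seen or seen.add(x))]

  @given(st.lists(st.integers()))
  def test_dedupe(items):
      result = dedupe_preserving_order(items)
      assert len(result) == len(set(items))   # no duplicates survive
      assert all(x in items for x in result)  # nothing is invented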

It can't possibly be more efficient to take 50 fairly expensive people and have them iterate 10,000 times toward something that still ends up being fairly broken---and requires maintenance in perpetuity---than it is to take 6 slightly more expensive people, and have them iterate 10 times toward something that does exactly and only what it is supposed to do and requires basically no attention once it's been instantiated, can it? Even if you're Instagram.

badpun
> It's become more important to produce the appearance of progress and effort than it is to actually make progress and thoughtfully apply effort.

That's why I'm sceptical about going to work on autonomous vehicle tech (although it is tempting). I'm afraid there's been so much money and hype around it that "failure is not an option" - i.e. companies will sooner see another Challenger disaster than admit that they're not on track for their announced goals.

im_down_w_otp
Based on your handle and your concerns, you should email me. :-)
badpun
I absolutely would, but how do I reach you? :) There's no contact info in your profile.
im_down_w_otp
[email protected] is a reasonable way to reach me.
im_down_w_otp
^^^ I'm quite serious.
arithma
Either your frustration is great, or you have the mind of a jedi. This is without a doubt the most scathing review I've read of agile, and you don't seem to even be trying.
jventura
I don't know who the target audience was, but I found the lecture very confusing and the message really poorly conveyed (he keeps saying what is already written on the slides, etc.). I had to play it at 1.6x the regular speed so I could move on with it.

Regarding the topic, I think there are multiple ways of expressing ideas, where math happens to be just one of them. For instance, for the Greatest Common Divisor example, you could just say that "if x equals y the gcd of x and y is x, else you need to keep subtracting the smallest number from the largest until both are equal".

You could use simple math, such as:

             / x if x=y
  gcd(x,y) = | gcd(x-y, y) if x > y
             \ gcd(x, y-x) if x < y
Or you could use code:

  def gcd(x,y):
      if x==y:
          return x
      elif x>y:
          return gcd(x-y, y)
      else:
          return gcd(x, y-x)
In essence, what I want to say is that his definition of beauty and power may not match mine, but I will not argue that math can be more succinct or allow for better abstractions in many cases.

In the end, I got the feeling that he is trying to "sell" us his TLA+ approach.

Apaec
Code doesn't have to look like that; look at the Haskell version:

  gcd x y | x == y = x 
          | x > y  = gcd (x - y) y 
          | otherwise = gcd x (y - x)
Much closer to the math notation.
maccam912
I think he was trying to show that the TLA+ approach led to easier proofs of the base case and inductive case. I don't think he's arguing that your gcd program is bad. That's what I'd use. But TLA+ might be a good tool to break out when you need to have a solid proof that the code you wrote will always end up getting you the right answer. We probably don't need it for side projects, but it's nice to know that AWS has some proofs around that the services I use there can't end up in an inconsistent state as a result of any coding logic.
georgewsinger
TLA+ vs. Functional Languages. I once tried to understand how TLA+ could be useful if your implementation language was primarily functional (i.e., Haskell). I wrote to Leslie Lamport on his mailing list and he actually responded with what I think was a very informative dialogue: https://groups.google.com/forum/#!searchin/tlaplus/George$20...

I hope TLA+ gains more popularity. It's an interesting technology.

heydenberk
You should submit this thread to HN!
nkoren
Meta-comment: if you're producing a video of a presentation with lots of equations on the slides, PLEASE make the slides larger than a postage-stamp. I wanted to be able to follow this in detail, but was unable to do so because the slides were illegible, and following them verbally was something that even tripped up the presenter.
mickeyr
There's a better version here:

https://hitsmediaweb.h-its.org/Mediasite/Play/abdee6c8696a48...

jononor
I think we need reusable (formal) specifications for it to become adopted widely. If that could be done in a way similar to how we today use open-source libraries of reusable code, findable via a package manager, it might fly. Writing formal specs from scratch each and every time is too time-consuming relative to the gain, except in a few critical use cases.
settler4
And then find a way to run that and so turn it into a programming language?
jononor
No we already have enough of those. But one could implement a way of annotating that some code/package (in ordinary programming language) implements a certain specification. This metadata should be searchable through the package manager.
emodendroket
What, like WCF?
pron
If you use TLA+ you'll see why this is likely unnecessary. A huge specification, made at three levels of detail, can run to 2500 LOC. Because you have so much control over the level of detail, and because the language is so expressive, there is little need for reuse (except maybe for some common definitions, e.g. of different consistency levels, which are very short but can be non-trivial).
jononor
How do you mean unnecessary? Surely writing such a specification takes many man hours, even after having spent many man hours to learn TLA+? Can none of this effort be encapsulated into something reusable?

The desire for reuse is not driven by a want to reduce LOC, but hours spent.

pron
Most of the effort writing TLA+ is about the what not the how. In general, we know how to make code reuse effective when a lot of implementation hides behind a thin API, but in TLA+ there is no implementation in a meaningful sense. A library of requirements would make about as much sense, and while maybe it's something that could be done, I'm not sure we know how to do it (but if you've seen something like it, I'd love to see it). So maybe it's an interesting idea, but I'm not sure how it would be done. Aside from a utility module that I like including in most of my specs, I never "reuse" modules as is. I often do, however, copy-paste-modify pieces of other specifications, which, I assume, is not unlike how lawyers reuse portions of contracts.

I guess there are occasionally pieces that could be used as-is (e.g. a description of a lossy network), but those tend to be so simple that looking for them in some Maven Central for spec modules would probably be more effort than writing them from scratch every time.

rixarn
I'm more with Mike Acton in this respect; the hardware is the platform, not the software. And in that sense... In order to really make good use of your hardware you need to understand your hardware and code accordingly. I doubt you can achieve great performance on a language that focuses entirely on mathematical representations without considering the hardware.

So yeah... you end up with a beautiful mathematical expression ... but runs way slower than an "ugly" equivalent in c code.

Jtsummers
The problem with being too close to your hardware is that you become non-portable, or you create a situation where a CPU for a word processor becomes the dominant platform for decades.

The goal should be somewhere between the two levels. Most code doesn't need to be as efficient as hand-tuned assembly. And most languages aren't so bad that they create many orders of magnitude difference between that hand-tuned assembly and themselves in performance. C may keep you within the same order of magnitude as that hand-tuned assembly. But ocaml can as well, and offers much cleaner expressions of many algorithms and ideas (closer to their specification concept than their machine code run time).

When one language becomes insufficient, accept polyglots. If Erlang's mathematical computations are too slow, create a C program that can be loaded in, or that calls out to OpenCL if need be. But Erlang's ability to express concurrency is among the best, and C's is among the worst. So why use one of the worst languages for concurrency when you can use one of the best, and restrict yourself to C code where needed?

The same consideration should be applied to other problem areas. Logic languages and constraint programming can solve many computational problems, and often very efficiently. But both are light years away from the underlying machine they're running on. Should they be discarded so we can hand write and hand tune a constraint solver in C every time?

While it's wasteful to rewrite everything 10 levels above your hardware (an interpreted language, running on a vm, running on a vm, until eventually you find the CPU), it's hardly appropriate to drop everything and insist on the lowest level (or near lowest level) language around. Instead, drop those inefficient languages that barely add anything of value (no novel capabilities, no better safety guarantees), keep the ones that are reasonably efficient or maybe inefficient but offer some true utility. Then code in the highest level language that's reasonable, moving down the stack only when needed for performance (or other) reasons.

rixarn
I agree with some of the things you say... mostly with the "use the right tool for the right job". In that sense, C might not be the most sensible choice for some things where you can end up with a solution that is twice as slow or more (and it's acceptable) but a lot easier to code and maintain. But if it's several orders of magnitude slower? ... I just used C as an example of something that is obvious to reason about when comparing the alternative.

Portability comes with a price. And the more you abstract away from hardware, the less you will be able to use it to its full potential. Most of the code written these days isn't designed around things like cache locality, for example, and we have code that is 100 times as slow as it should be... even in plain C!

That said, we can agree that right now (more or less) you can reason about cache locality (for example) in "traditional" languages (even in Javascript... to some extent) and keep your distance from bare metal. But you lose the ability to reason about that and other things by using a PL so focused on writing beautiful mathematical expressions. For me, that's a deal breaker.

cttet
You can do both.. But it is hard.. https://www.youtube.com/watch?v=zt0OQb1DBko
rixarn
Interesting! Thanks for sharing :) ATS seems to be a little bit far from Lamport because you do have data types and other stuff, but yeah.. maybe this is as close as you can get.
thrmsforbfast
> In order to really make good use of your hardware you need to understand your hardware and code accordingly

Understanding your hardware is only half the story. To use the hardware effectively, you also need to have mathematical insights into your problem.

A rather extreme proof of this fact is bubble_sort.c vs. quick_sort.java. I think it's clear which of those is using the machine more effectively...

> I doubt you can achieve great performance on a language that focuses entirely on mathematical representations without considering the hardware.

Plenty of very high level languages have extremely effective compilers. SQL query engines come to mind.

> So yeah... you end up with a beautiful mathematical expression ... but runs way slower than an "ugly" equivalent in c code.

TLA+ et al. are primarily about correctness and designed for situations where correctness is hard to get. Beauty is relevant only insofar as it helps one on the path to correctness.

But yes, hand-crafted code will get you your GIGO much faster :)

rixarn
I agree, you need mathematical insight too, but you can have that and still code as we do now without sacrificing a way to express solutions that consider things such as cache locality. With what he proposes, that’s simply not possible. There’s a very interesting talk by Scott Meyers about how important cache locality is, to the point where, in practice, for some problems, big-O analysis breaks down because of cache misses. So an algorithm that is faster by those standards gets its ass kicked by a “slower” one just because of cache misses.
thrmsforbfast
> With what he proposes, that’s simply not possible

There's nothing special about the cache hierarchy. It's just one more thing that can be reasoned about using mathematics. If you care about cache locality, you can write down the semantics of an optimizing compiler and prove that this compiler uses the cache hierarchy in an optimal way. Why would that be impossible?

Not only is it possible, but it might even be easier in high-level languages. For example, the argument has even been made that the importance of cache locality is really an argument in favor of higher-level functional languages.

But again, with respect to TLA+, this is all a red herring and completely misses the point. The more important point re: TLA+ is the GIGO/correctness problem. I don't give a damn how cache-efficient my algorithm is if it's not even computing the correct results!

moron4hire
I think he makes an assumption that is fundamentally incorrect in the vast majority of cases: that the people owning the programs care about specification.

There is a huge, dark-matter body of software development that goes on in the world under the thumb of megalomaniacs whose only metric of productivity is how much unpaid overtime his programmers are working.

lake99
Looks like an interesting talk, but are the slides available online somewhere?

Edit: Never mind. I found the slides he had made for the same talk given elsewhere. https://drive.google.com/open?id=1sZciK13LDh4rBFwbRCk0Oylga3...

mad44
A summary of the article the talk was based on and some discussion on this. http://muratbuffalo.blogspot.com/2018/07/if-youre-not-writin...
jkingsbery
"It's quite likely that during their lifetime, machine learning will completely change the nature of programming."

- Why is this the case? Even if it's true, it hardly seems obviously true.

walshemj
I recall hearing some thing similar around 1980 :-)
about_help
Because soon you will just ask your computer to build your code after giving it a few parameters. Then you will proof read and sanity check the code and spend more time optimizing and working on the really hard problems.
Retra
How soon? I don't know of any software I've written for which that could be done. The first program I ever wrote was an implementation of Tic-Tac-Toe, and I can't imagine a set of parameters to produce that which aren't specifically for making games of that nature while also being easier to select than writing the code myself. I'd bet such a system is impossible in general, and that that should be the obvious conclusion.

Now, machine learning to find bugs or adjust variables... yeah. I can see that.

boomlinde
That is already how you program Prolog, no proof reading or machine learning required. I also think that specifying the problem is one of the harder aspects of software development.

On the contrary, I think machine learning might be more immediately useful for things like low-level optimization.

rileymat2
Right now source control applications don’t even detect simple merge conflicts based on meaning.

For example if I remove a global variable from a source file and another branch adds another reference to it, if it does not occur on the same line git will happily merge the file with no conflict.
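
A toy illustration of that failure mode (file and names invented for the example): the two edits touch different lines, so the textual merge reports no conflict, and the mistake only surfaces when the merged module runs (or, in a statically checked language, at compile time).

  # merged greetings.py - git reported a clean merge

  # Branch A deleted this global:
  # GREETING = "hello"

  def greet(name):                  # Branch B added this function elsewhere in the file
      return GREETING + ", " + name

  print(greet("world"))             # NameError at runtime: GREETING no longer exists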

wellpast
This still hardly seems obvious.
xamuel
Assuming you could concretely nail down all the vague terms you just used, I'd be willing to bet real money we won't have anything remotely like that within the next 10 years :)
err4nt
I think a lot of people who suggest things like this use the image of a computer writing code, or generating code in a programming language that didn't exist, when I think in reality what's more likely to be implemented is something more like Automator for Mac OS: https://www.youtube.com/watch?v=Z56PBanBSx4

In Automator, pre-made blocks of code that do different things (filesystem manipulation, data transformations, application control, machine control) can be assembled into a workflow, and configured with options. It is possible that something like that could be given a speech-based UI (rather than keyboard and mouse) and the result is that pre-existing code is assembled and configured. That's a lot more realistic.

aidenn0
I haven't watched the full talk, so let me know if this gets explored by Lamport, but...

Here's a thought: PL researchers seem to generally agree that typed languages are superior to untyped languages, yet programmers tend to prefer untyped languages to typed languages, to the point where Java and C++ have the fanciest type systems in common use, with ML being the closest thing to an academic language that gets significant use.

It seems intuitive to me that typed languages are bad for exploring algorithmic design, so could this be where the disconnect is? Most programmers do not know what algorithm their final product is going to have when they start, and dynamic languages are much more ergonomic when you are exploring the design space.

Certainly if I knew exactly what program I was going to write when I started, I could probably write it in haskell; indeed once I have a working program, translating it to haskell is often fairly mechanical. However much of my time is spent trialing algorithms that may or may not be a good fit for the problem I'm solving and in those cases I feel like haskell gets in my way, while lisp gets out of my way.

I don't know if there is any practical takeaway from this, but it does seem to at least give an explanation for the divide between PL theory and practice.

simias
>yet programmers tend to prefer untyped languages to typed languages

The more strongly typed a language is, the more it forces you to think about the big picture and architect your entire application in a way that will map nicely to the type system. For instance, with a language like Rust, which is both strongly typed and integrates concepts like lifetimes into the type system, you need to think long and hard early on about how your "data" and your "processes" are going to interact to figure out a way to map everything cleanly.

Doing this correctly requires a lot of experience. You need to be able to project quite accurately what your project is going to look like (lest you end up refactoring everything a few weeks from now or, worse, lose a lot of time fighting the type system instead of leaning on it) and you need a deep understanding of your tools to figure out what's the best way to model it and the tradeoffs involved.

If you're a beginner it can be seen as overwhelming and maybe even counter-productive, let me code and stop bothering me about integer to string conversions!

At this point I'm tempted to draw the conclusion that "dynamic languages are for n00bs and static typing is for the 31337" but my self-awareness douche alarm is starting to beep very hard, so I'm going to deviate and concede that many brilliant programmers seem to favor dynamic languages, so clearly some thrive in this environment. So instead I'm going to draw a weaker conclusion: while competent programmers may or may not prefer dynamic languages, beginners will almost certainly favor the more forgiving nature of dynamically typed languages. That biases towards dynamic languages in the general population (and especially the population who cares about engaging in language wars in the first place. Us experienced coders know that the only worthwhile debate is tabs vs. spaces. It's spaces by the way).

hhmc
> The more strongly typed a language is the more it forces you to think about the big picture

This is a good point. A straightforward corollary is that the size of the picture can be drastically different for beginners and experts.

One would imagine that beginners work on more small toy projects than experts. In such cases the protections that static languages offer can seem less valuable, and the friction more painful.

veebat
I think most of the dynamic/static bias depends on the specific experiences of your problem domain.

If you have a lot of types of data to model (layers of records, sequences, indirection, etc.), dynamic types make it easy to start bodging something together, but result in write-only code. So you end up wanting to have additional structure and definition - maybe not immediately, but just after prototyping is done and you have a first working example.

If you're just applying a routine algorithm that ultimately does a round trip from SQL, you don't need that additional assurance. The database is already doing the important part of the work.

If you have simple data but it needs to go as fast as possible, you end up wanting to work at a low level, and then the machine size of the data becomes important - so you end up with a static types bias.

btilly
I believe that Steve Yegge got it right in https://plus.google.com/110981030061712822816/posts/KaSKeg4v... and it is all about your political leaning. Or more precisely, it is about how much ceremony you are willing to go through for some promised benefit. If you're willing to go through a lot of ceremony, you're a software conservative. If you don't see a point to the ceremony (either because it gets in your way, or you don't believe that the promised benefit is that big), then you're a software liberal. Which you are likely to be is a function of both your personality and what kind of software you write.

It really isn't about competence. Competent programmers can fall into either camp. However there is a strong tendency for people to conclude that people who disagree with them are incompetent. As you just did. And then backed up to realize that there are good people who disagree with you.

I would also disagree with your weaker conclusion. In general bad programmers agree with whatever role-model they have most recently imprinted on. If they've picked up a Java book that talks about typing, they assume that it is right and types are worthwhile. If they pick up a PHP book, then vice versa. The current popularity of Javascript and Python mean that a lot are going to be on the dynamic side, but far from all.

UK-Al05
Most of the oldies I know prefer static typing.

A lot of the new people use dynamic types because that's all they know.

travisoneill1
True. I started coding a couple years ago and used a static typed language for the first time a couple months ago. Definitely don't want to go back.
aidenn0
Care to define "old"? I'm 38, and the programmers I know that are older than me don't seem to have a significantly different spread compared to younger (excluding 1-language programmers in both cases).

Maybe a bit less python and a bit more of one of: perl/awk/apl/smalltalk but still they seem to choose untyped languages over typed languages when they are unconstrained to pick the language.

maxxxxx
Only until they have to refactor an older codebase of some size...
RandomInteger4
They who? Old or new?
maxxxxx
"A lot of the new people"
RandomInteger4
Ah, thank you for disambiguating the pronoun 'they' whose reference wasn't clear as you provided no antecedent.

EDIT: Heh, kind of ironic really. A pronoun without an antecedent is like a variable in a dynamic programming language whose context isn't clear.

maxxxxx
I guess I am a dynamic writer but a strongly typed programmer :-)
TeMPOraL
But that's only if they also got some experience working with static typing. Without that, they won't see the difference in difficulty, as they have no reference to compare to.
maxxxxx
Feeling the pain would already be a good start :-). With a lot of Javascript projects being very short term a lot of people never get the experience of maintaining a code base for years.
fjsolwmv
That's when they become old people
vishnugupta
This. I'm an oldie and I prefer static typing. Heck, I can't even get my head around dynamically typed languages. And for any non-trivial production-level software I strongly believe the choice of programming language has no bearing on productivity. I.e., a team of Java experts will be as productive in a Java stack as a team of Python experts in a Python stack.

A big reason for the recent shift towards dynamic languages, in my opinion, is what's taught in college nowadays. I believe it's mostly Python or the like. In my college days (late 1990s) it was all statically typed languages (Fortran, Cobol, C, Pascal, etc.). So that's how my generation ended up learning to model a problem and solution in statically typed languages, and developed a natural preference towards them.

emodendroket
I agree. People obsess about the choice of programming language, insisting that months of study are needed to choose. I just don't see it except for very specialized tools or applications.

That said, I find maintaining an old, large application written in an untyped language pretty miserable.

i_made_a_booboo
I've had equal misery both typed and untyped on large, old, legacy stuff.

Maybe programming just sucks.

emodendroket
At least you don't have to read the method to figure out the input and output types. And the IDE can help you much more. I find these things big when trying to modify some massive codebase I don't know inside and out.
pron
Lamport would argue that if you're debating typed vs. untyped PLs, you're already missing the point, as all programming languages are necessarily[1] at a level that's too low for system/algorithmic thinking, and if you're working in a language appropriate to thinking about algorithms/systems, then the considerations surrounding typing are quite different from those pertaining to PLs (TLA+ happens to be untyped, but it's not a PL by any means).

[1]: Due to the requirement that they be efficiently compiled/interpreted.

323454
As I understand it, TLA may technically be untyped but that's simply because you are supposed to implement your own more granular type checking as part of your spec.
pron
It's true that TLA+ doesn't rely on types to specify properties, but it's still untyped. You could implement similar arbitrary properties in any programming language as predicates.
winstonewert
I think a lot of the disconnect comes from "static typing" meaning different things to different people. When some people say "static typing" they are thinking of a language like Haskell with a powerful type system. However, when most programmers say "static typing" they are thinking of a language like Java or C++ with a rather underpowered type system. Programmers who prefer dynamic typing are largely deciding that they would rather work in a language without a static type system than in one with a crappy type system.
Koshkin
The correct distinction is static vs. dynamic typing on the one hand, and strong vs. weak vs. no typing on the other.
marcosdumay
I think the point is way more complex than this. Some type systems are more expressive than others, some type systems require more ceremony than others, and some type systems are more constraining than others.

The real problem with all those discussions about types (and why I think nobody should try to pass an opinion unless they tried something at least as good as Haskell) is that the most used static languages have the least constraining, most ceremonious and least expressive systems around. Developers gain nearly nothing from their type systems, but they are incredibly demanding of upkeep.

nestorD
I find that ML (OCaml and F#) does not get in my way (Haskell being the exception, due to the absence of quick and dirty mutability), while dynamic typing (in Python) does hinder me when I depend on external libraries.
vmchale
> Haskell being the exception due to the absence of quick and dirty mutability

I don't think that's "getting in your way" so much as asking you to learn pure, immutable data structures. Which is good even aside from concerns about laziness.

nybble41
> Haskell being the exception due to the absence of quick and dirty mutability

Haskell has ST for mutable local variables, and for the truly "quick and dirty" cases there's always unsafePerformIO. What is it you're trying to express that wouldn't be handled by either of these options?

pjmlp
Which programmers?!

With the exception of JavaScript, I only use dynamic languages for throw away scripts.

I had lots of fun with Lisp and Smalltalk, but can't imagine using them with the team sizes I usually work with.

outworlder
> I had lots of fun with Lisp and Smalltalk, but can't imagine using them with the team sizes I usually work with.

I have the impression that it correlates less with team size than with average member skill.

Well-written and logically organized Lisp code shouldn't be any harder to maintain just because the team size has increased. Rather, Lisp provides too much power (a macro system, a compiler easily available at runtime, etc.), and that power can easily be abused to wreck a codebase.

I have chosen Golang for most new development in the group I am in, basically because it forces a straitjacket onto people, even down to things like code formatting. I got tired of asking people to write unit tests for their Python code; the fact that people can get away with throwing Python code with zero tests into production baffles me. It's like taking C code to production without compiling it.

So at least I have a compiler enforcing some level of code correctness and some uniformity in code conventions. And the fact that no one is overriding anything they should not...

marcosdumay
> Certainly if I knew exactly what program I was going to write when I started, I could probably write it in haskell;

If you don't know what program you want from the start, it's yet another reason to write in Haskell.

Haskell programs are much easier to change than programs in any dynamically typed language. Each time you change your mind in Python, you have to basically rewrite your entire program, and if you do that often enough (2 or 3 times is already enough) you will still end up with a mess because of all the changes.

tristramb
Should Your Specification Language Be Typed? Leslie Lamport & Lawrence Paulson, ACM ToPLaS, 21, 502-526, 1999: http://lamport.azurewebsites.net/pubs/pubs.html#lamport-type...
aasasd
Personally, while I strictly use dynamically-typed languages and wouldn't want to do all that typecast typing in Java, I still use type hints for the IDE—in PHP, Python and JS. Because hints give valuable information and enable IDE features that can be directly used by me in my work, as opposed to typecasts which make me toil for the compiler.

So for me, the divide is that the tools shouldn't make me do work that they can do themselves, namely type inference. And I'm baffled that Java people just outright decided they don't want type inference. They chose to type things that the computer can calculate for them! It makes no sense.

Granted, the usefulness of the work I still choose to do scales with the longevity and involvedness of the code, so I skip the hints in scripts that will only live for half an hour.
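
For illustration, a minimal Python sketch of the kind of hints being described; the names and values are made up, and the annotations cost nothing at runtime but give an IDE or a checker such as mypy something to work with:

    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        age: int

    def greeting(user: User, shout: bool = False) -> str:
        # The hints are not enforced when this runs; they exist for the
        # reader, the IDE, and optional checkers such as mypy.
        text = "Hello, " + user.name
        return text.upper() if shout else text

    print(greeting(User("Ada", 36)))   # Hello, Ada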

Floegipoky
Java already had limited generic type inference, and 10 just introduced local variable type inference: https://developer.oracle.com/java/jdk-10-local-variable-type...

That being said the only thing I've run on jre10 so far has been Scala, so ymmv.

Karrot_Kream
I think the crufty type systems and semantics of Java (pre-1.7) and C++ did drive a lot of programmers away from static typing, but most recently popular languages (Go, Rust, Scala, Typescript, Kotlin, Swift) have been statically typed.
slaymaker1907
I don't think it is fair to say that TypeScript is not in common use, and TypeScript has a far more complicated type system than Java or C++, particularly since it uses structural rather than nominal typing.
danbolt
I was thinking the same thing about TypeScript. It's certainly catching up, and it's a great way of shifting gradually from dynamic to static types as your requirements crystallize.
nabla9
There are very few untyped languages.

* The distinction between dynamically typed and statically typed is clear.

* Weak versus strong typing has less clear definition. Usually weakly typed means many implicit conversions between data types and representation.

Most programmers like the late binding approach. That is easy to do with dynamic typing. Adding static typing or type inference for late binding is more difficult.

Programmers usually hate the "weak" typing aspects of their dynamically typed languages when those aspects are chosen badly. JavaScript is a good example of choosing implicit type conversions in a way that leads to confusion.

Jweb_Guru
> The distinction between dynamically typed and statically typed is clear.

Not so clear as you might think. For example, it's relatively easy to construct a typed version of the untyped lambda calculus using isorecursive types.

nabla9
That's a completely abstract example, not relevant to actual practice.

It's hard for an actual programmer to make that mistake.

Jweb_Guru
It's a concrete example, but I understand your point that "in practice" we know what dynamic vs. static typing means. Personally, I think a lot of that just comes down to marketing, though :)
seanmcdirmid
There are many PL researchers in the dynamic typing camp; “generally” is hardly justified.
emodendroket
I feel like all the hype for dynamic languages has led to a lot of backlash in the opposite direction.
HeyImAlex
For me it was writing unit tests for Python where you're pretty much manually checking types. Or running slow one-off processing scripts that fail at the last step because of an issue that static types would have caught. Writing the same thing in Go usually takes the same amount of time, runs 5x as fast, and works the first time.
traverseda
So, use python and static type annotations for the best of both worlds?
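
For what it's worth, a small sketch of the scenario described above, assuming Python plus a checker such as mypy; the file and function names are invented:

    # process.py (hypothetical slow one-off script)
    from typing import List

    def load() -> List[str]:
        # Stand-in for hours of processing that happens to yield strings.
        return ["1.5", "2.5"]

    def summarize(values: List[float]) -> float:
        return sum(values) / len(values)

    # At runtime this only blows up at the very last step (sum() on strings);
    # a checker flags the List[str] vs List[float] mismatch before the
    # script is ever run.
    print(summarize(load()))
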
Kenji
The type system in C++ (with templates) is actually incredibly powerful.
drb91
> programmers tend to prefer untyped (dynamic?) languages to typed languages

I have not found this in practice. I have found that some developers prefer a dynamically typed language for certain situations, like rapid prototyping. However when maintaining code bases, I've found that developers often bemoan the lack of static types to assist in small modifications to code they haven't touched in a while.

Personally I love the ability of static, declared types to organize assumptions about code; I also love the ability to do terrible things to types to shoehorn poorly designed third party libraries into my codebase. I don't see it as a developer preference so much as dependent on the problem domain.

pixelrevision
For me it's typically weighted less toward the type system and more toward the fact that I don't need to wait for a compiler or use any fancy tools to get a quick idea realized.
stcredzero
If you remove a lot of the cruft and redundancy and add memory management, a statically typed language with fast compiles can have some of the benefits of a dynamic language.

If I could have a dynamic language, and static typing, that would be the best of both worlds. However, gradual typing doesn't seem to have caught on.

maxxxxx
With C# you can mix dynamic and static to some degree. In some cases, like parsing JSON, dynamic code can be much shorter. But for maintainability the dynamic code parts should be kept localized and to a minimum.
dvlsg
Works pretty well in typescript, too.

I don't find myself actually using gradual typing though. Not very often at least. Even in prototypical stages of an application, I usually just put together the types of what I currently have, even if I think they will change later.

maxxxxx
Typescript is pretty similar to C# from what I can tell. Not a surprise considering that the designer is the same.

I have found it very useful for things like translating incoming JSON or XML into the typed world. As long as the input data is of a known format, dynamic code is much shorter and more concise.
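
The same keep-the-dynamic-part-at-the-boundary pattern, sketched in Python rather than C#/TypeScript purely for illustration (the Order shape here is made up):

    import json
    from dataclasses import dataclass

    @dataclass
    class Order:
        id: int
        total: float

    raw = json.loads('{"id": 7, "total": 19.99}')  # dynamic: just a dict
    order = Order(id=int(raw["id"]), total=float(raw["total"]))  # typed from here on

    print(order.total)  # 19.99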

btilly
Gradual typing has a strong tendency to turn into never-typing because when you've got it running there is no real point to going back and adding types. Which divides people back into those who want dynamic typing and those who want static typing.
stcredzero
Tracing JIT VMs gather the information on what the types actually are at runtime. I wish that such JIT VMs running server software would gather such information, which could then be automatically applied.
btilly
The JIT VMs gather that information, but then are careful to hide all of the typed calls behind run-time checks for the validity of the types. Failing to do so is an invitation to create bugs on rare use cases that didn't happen to be in your run. Making those code changes either results in duplicated auto-generated code that nobody looks at, or results in a source of possible bugs.

To give a concrete example, suppose that you have a weighted graph whose weights are integers. Code to find a Minimum Spanning Tree will always have an integer for the weight. Great. But if you add that type annotation, you'll run into trouble when someone checks whether the previously found MST is unique by calling the same algorithm again using as weights (weight, edges_in_common_with_prev_mst). (This will find the MST that shares as few edges as possible with the original, so if there are > 1 you'll find a different one.) Those ordered pairs were just fine in the original code, but don't fit the type annotation.
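
A rough Python sketch of that reuse pattern (not btilly's actual code): tuples compare lexicographically, so the same comparison-based routine accepts plain integer weights and (weight, shared_edges) pairs, while an int annotation would only admit the first use.

    def lighter(a, b):
        # Return whichever "weight" compares smaller; anything supporting <= works.
        return a if a <= b else b

    print(lighter(3, 5))            # 3: the original integer weights
    print(lighter((3, 1), (3, 0)))  # (3, 0): ties on weight broken by the
                                    # shared-edge count, as described above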

stcredzero
> The JIT VMs gather that information, but then are careful to hide all of the typed calls behind run-time checks for the validity of the types. Failing to do so is an invitation to create bugs on rare use cases that didn't happen to be in your run. Making those code changes either results in duplicated auto-generated code that nobody looks at, or results in a source of possible bugs.

Of course, you want to use data correctly. There was a study of server code in Python that found that a lot of playing fast and loose with type information happens just after startup, but then the types settle down. If the initialization can be isolated in certain parts of the code base, it would make such data easier to use.

drb91
I think this fits squarely into the “rapid prototyping” scenario. Perhaps if you had to design your software to very tight constraints you might find static typing to be a productivity boost.
scarface74
I’ve been developing with compiled languages since the early 90s and even with a computer of that era, compiling within a decent IDE hasn’t slowed me down.
pixelrevision
It's not some hard rule. Just pointing out that a lot of these decisions can have very little to do with the type system.
laythea
Vote for Typed languages here, every day of the week. Why introduce yet another (and unnecessary) sub-game of "find the type"? Keep It Simple!
nickodell
I agree.

I recently tried to port an ~11k SLOC object-oriented Perl program from GTK2 to GTK3. The lack of types was maddening. I would have some simple goal, like "find the path of $image." I would immediately run into a roadblock: what type is $image? Is it a string? A file handle? A Gnome2::File?

So I trace the variable back. It's a parameter, so look at one of the places that calls it. That callsite uses a global variable. Aha! I go to where it's initialized. It's initially set to undef. Dead end.

I couldn't help but think, "I wish I was programming in Java right now." That was a new low for me.

jmpeax
I find it frustrating how he's against "types" but shows the GCD algorithm where x and y must adhere to a type that can be compared with equality, admits an ordering, and has a subtraction operator. Is the algorithm defined for the reals? Can we use a partial ordering? I'm skeptical of any proposal that seems to have missed even a superficial coverage of existing knowledge/literature on the topic.
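
To make the comment concrete, here is the subtraction form of the GCD in Python (a sketch, not Lamport's slide): nothing in the code itself restricts x and y to the naturals, which is exactly the implicit typing the comment is pointing at.

    def gcd(x, y):
        # Needs only equality, an ordering, and subtraction.
        while x != y:
            if x < y:
                y = y - x
            else:
                x = x - y
        return x

    print(gcd(12, 18))    # 6
    print(gcd(1.5, 4.5))  # 1.5: happens to work here, but over the exact reals
                          # (say 1 and the square root of 2) the loop would
                          # never terminate, so the domain matters
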
scentoni
I noted that the one question in the video was asked by https://en.wikipedia.org/wiki/William_Kahan , the main architect of https://en.wikipedia.org/wiki/IEEE_754 floating point. Both Lamport and Kahan are Turing Award winners. https://en.wikipedia.org/wiki/Leslie_Lamport
cwyers
The assertion here seems to be that TLA+ isn't a programming language? I'm not sure that makes sense to me.
pron
It most certainly isn't. It is generally impossible to interpret/compile a TLA+ algorithm efficiently (i.e. as a program that runs with similar complexity to the algorithm) because of the inherent nondeterminism in TLA+. OTOH, it is precisely this nondeterminism that allows you to use TLA+ to describe any algorithm/system at any level of detail. In fact, you can show that a language with similar descriptive power to TLA+ cannot be a programming language, and that no programming language can have a similar descriptive power (you could embed a specification language inside a programming language, but then you arguably have two languages).
cwyers
Since when is it a requirement that a programming language be efficient?

My larger point is that he seems to be using "programming language" as a shorthand for an imperative, typed programming language. A lot of what he says doesn't seem to apply very well to SQL or Prolog or Lisp, for instance. (I say this having only skimmed the slides, I don't have two hours for this right now.) I won't say that it's a straw man, because there are plenty of programming languages that fit the way he's using the term and some of them are incredibly popular, but it seems to be a definition that leaves out a lot of programming languages.

pron
> Since when is it a requirement that a programming language be efficient?

It is (generally) a requirement that if you're writing a linear-time algorithm, then the compiled/interpreted program will execute in linear time. This is not the case for TLA+. You can describe linear-time algorithms in a way that can only be compiled into, say, an exponential-time program, and possibly not at all (i.e., extracting an implementation is undecidable).

I wrote about this at some length here: https://pron.github.io/posts/tlaplus_part3

> My larger point is that he seems to be using "programming language" as a shorthand for an imperative, typed programming language

He means a language with the property I mentioned above. It is somewhat true that Prolog has a tiny portion of the expressive power that comes from a truly nondeterministic language such as TLA+, but it still falls under "programming language." If you want to get technical, a language such as Prolog can let you program with some limited use of quantifiers. In TLA+, you can specify using unlimited use of quantifiers ranging over infinite and even uncountable domains.

In TLA+ you can describe the property "decides halting in linear time," and then show that a specific implementation can do so in some special cases for example. No programming language can do that (i.e. describe non-computable "algorithms") but it is essential for specification languages.

Jweb_Guru
> No programming language can do that (i.e. describe non-computable "algorithms")

Coq certainly can describe properties like this; they're just uninhabited types (all the necessary concepts needed to define "decides halting in linear time" can be defined over an inductive definition of the step function for the programming language of the halting decider; or if what you in fact meant was "a type that represents programs that halt in linear time", it's even easier and requires the same concepts, but a less powerful programming language). There is no specification sublanguage required. I find your distinction to be pretty iffy. Of course, Coq is not a very traditional programming language, but it is a programming language nonetheless.

(I note that you claim that Coq separates types and terms into two different semantic and syntactic categories, which isn't really true. The set model, which is the standard model that justifies the consistency of the calculus of inductive constructions, makes no such distinction!).

pron
Typed programming languages (any typed formal systems, for that matter) are made of two languages, type level and term/object level. While the two are strongly related, they are separate (a term can inhabit a type, but not another term). The level of nondeterminism required for specification exists at the type level and not at the term level, while programming (in the sense I defined above) exists only at the term level. This is what I meant when I said above that a specification language can be embedded into a programming language (this doesn't require types, but can also be done with contracts), but those are still two languages, even if they share syntax. If you like, you can think of TLA+ as being entirely at the type level.
Jweb_Guru
My point is that there are models of (essentially) Coq where there is no distinction between types and terms; both are translated into sets, in a fairly straightforward way. Additionally, one of the major points of dependent type theory is that reduction can occur anywhere, including directly within a type; i.e. term and type cannot be defined separately from one another. I'll go further and say that the fact that a term can only inhabit a type is in some sense definitional, since types are also terms; I can quite easily perform reduction and programming in a term that includes quantifiers, which you would say was part of the nondeterministic type level (in other words, terms do inhabit other terms). So it is extremely unclear to me that the distinction you are making is fundamental.

(To be clear, I do agree that there is a difference between specifying a type and synthesizing an inhabitant for that type; I just think that the "programming-specification" distinction you are making is somewhat artificial, with dependently typed languages demonstrating this particularly effectively by letting you program within your specification as well).

pron
> My point is that there are models of (essentially) Coq where there is no distinction between types and terms

I don't think this is true for Coq. Ultimately, Coq relies on a type/inhabitant separation. When you spoke of Coq as a programming language, the terms you can always "run" are only the inhabitants.

> So it is extremely unclear to me that the distinction you are making is fundamental.

I don't know if it is entirely binary, but it is sufficiently precise for useful terminology. A programming language is a language that can be used for programming, namely it is a language whose every term can be mechanically interpreted in an "efficient manner."

> dependently typed languages demonstrating this particularly effectively by letting you program within your specification as well

I don't think that any existing dependently typed languages do anything particularly effectively (neither programming nor specification), but that is beside the point :) It is certainly possible to combine a specification language and a programming language, but not without some formal delineation or the experience would be frustrating (i.e., it is possible to interpret some subset of TLA+ terms, but I am not sure it can be syntactically easy to determine which). This is, however, something that TLA+ tries hard to avoid as a design principle, because we do not yet know of any feasible way to scale specification from the "program level" to arbitrary levels of interest. Unlike Coq, TLA+ was designed not as a research tool for logicians, but as an industrial-strength tool for practitioners (I wrote about the design principles behind TLA+ in part 1 of my series: https://pron.github.io/tlaplus).

Jweb_Guru
> I don't think this is true for Coq. Ultimately, Coq relies on a type/inhabitant separation. When you spoke of Coq as a programming language, the terms you can always "run" are only the inhabitants.

Firstly, yes, it is absolutely true for Coq (or at least, a large subset of it). The set model is essentially the entire basis for CiC's claim to consistency. I'm fairly sure TLA+ also has such a model (somewhere).

Secondly, all terms in Coq, including types and universes, are inhabitants of some type, so there is no distinction between "the inhabitants" and all Coq terms (it's true that weak head reduction leaves products and other obvious types alone, but that's not the case for stronger forms of reduction like strong normalization, or for full conversion). Moreover, the conversion that occurs in the static phase (typechecking) has the same semantics as the execution during the dynamic phase (runtime). So I really don't understand what you're driving at.

> I don't know if it is entirely binary, but it is sufficiently precise for useful terminology. A programming language is a language that can be used for programming, namely it is a language whose every term can be mechanically interpreted in an "efficient manner."

"Efficient manner" is pretty vague. I suspect there are widely used programming languages that will differ by 3 or 4 orders of magnitude while evaluating semantically identical expressions, while nonetheless both clearly being programming languages. Besides, many optimizations are only optimizations for certain data distributions, and if the compiler guesses wrong it can hurt performance; but that doesn't seem to me to have anything to do with whether something is a programming language or not. If all you care about is the default reduction, Coq's certainly tries to be fairly efficient: beyond optimizing the usual lazy reduction, it exposes many other reduction strategies, tuning knobs to tweak how reduction runs to improve performance when domain information is available, and even contains multiple implementations of some reduction strategies so people can select the appropriate one for their use case.

> I don't think that any existing dependently typed languages do anything particularly effectively (neither programming nor specification), but that is beside the point :)

I'm well aware of your biases here. Regardless of their origins in PL research, plenty of useful work has been done in dependently typed languages, including much larger developments than any I'm aware of in TLA+. I'm not really interested in arguing their relative merits--clearly, both are useful--I'm just asking you not to pretend to make general statements about programming languages while actually talking about TLA+.

pron
> The set model is essentially the entire basis for CiC's claim to consistency.

I am not saying that types do not have a set model, but that types and inhabitants are fundamentally separate. Or, put another way, a lambda term specifies a single (parameterized) execution (up to evaluation strategy), and that is the only thing that can be made to "run" (I'm sure there are ways to extract, say, a sole inhabitant of a type, but types cannot in general be turned into executable programs).

> "Efficient manner" is pretty vague. I suspect there are widely used programming languages that will differ by 3 or 4 orders of magnitude while evaluating semantically identical expressions, while nonetheless both clearly being programming languages.

I'm talking about exponential differences in time complexity, or even infinite ones. You can easily specify "computations" with infinite (or even uncountable) nondeterminism in TLA.

> I'm just asking you not to pretend to make general statements about programming languages while actually talking about TLA+.

My (and I think, Lamport's point) is that there is a fundamental difference between a specification language and a programming language. It is certainly possible to mesh the two and call the result "one language", but then it must necessarily contain two deductive systems, and so I think the distinction can still be made. Even if you object to this distinction, you would agree that the "specification parts" of Coq (i.e. those with significant nondeterminism and use of quantifiers) cannot be generally mechanically executed in an efficient manner. So whether it's two languages or one, there is a fundamental difference between those expressions that can be used for programming (little nondeterminism/use of quantifiers) and those that are necessary for specification (unrestricted nondeterminism/quantifiers).

Jweb_Guru
> I'm talking about exponential differences in time complexity, or even infinite ones. You can easily specify "computations" with infinite (or even uncountable) nondeterminism in TLA.

SQL joins are worst-case exponential in the number of relations joined, where n = the total number of rows in the database (though there are many special cases that can help reduce this). I believe many people nonetheless consider SQL a programming language--with WITH RECURSIVE, it is Turing-complete. You could probably talk me into believing it's more of a "specification language," but despite that people use it for performance-critical things (admittedly usually in one of the special cases where it's better-than-exponential, but such special cases exist for most intractable problems; it doesn't seem to me that the nature of the language can be thought to fundamentally change depending on the data set).

> Even if you object to this distinction, you would agree that the "specification parts" of Coq (i.e. those with significant nondeterminism and use of quantifiers) cannot be generally mechanically executed in an efficient manner.

I specifically object to this! Coq executes what you term the specification parts in a very different way than trying to exhaustively enumerate solutions--one that is actually quite efficient and can be mechanically executed. Essentially, rather than trying to eliminate the quantifier, it tries reducing the type, and the body bound by it. A key point is that this mechanism isn't special-cased for the "specification" parts--it's exactly the same reduction mechanism used everywhere in the program, and indeed it's very important for conversion that reduction behave this way. Thus, you can indeed efficiently, mechanically execute a type in Coq. Once it reaches normal form, of course, you can't perform any more reduction--but from the Coq kernel's perspective, up to eta conversion, types in normal form are distinct from an enumeration of all their inhabitants, meaning it is indeed doing the right thing here.

In other words: Coq does not allow you to only program with non-type terms, and it doesn't make a clean distinction between inhabitants of types and types themselves. Moreover, while for quantified parts this isn't usually that interesting, for nondeterministic parts (such as match expressions on unknown variables) it can actually be quite important; for example, dependent elimination within the return type of a match clause can determine the index of a value of an inductively defined type, thus actually affecting program execution (and vice versa, of course; for instance, a type can be constructed representing n iterated lambdas, where n is an unknown value that may either be specified at runtime or at compile time. It is this latter case that gives rise to techniques like proof by reflection that may approximate an exhaustive search in order to produce a proof object--but they do so because the user wrote a program specifically to do exhaustive search, not because there's some exhaustive search mechanism built into the language).

pron
> I believe many people nonetheless consider SQL a programming language

Yes, but I don't think anyone considers a SQL join to be a description of a linear-time algorithm.

> Coq executes what you term the specification parts in a very different way than trying to exhaustively enumerate solutions--one that is actually quite efficient and can be mechanically executed.

I never said you always need to exhaustively enumerate solutions. Nevertheless, the problem of extracting a computation from a specification (e.g., an inhabitant from a type) is undecidable in general[1]. Hence, specifications cannot be generally compiled, therefore they are not programming languages.

They could be a new kind of programming language, where some search algorithm could sometimes find a computation, but that would be a "programming language" in a completely different sense; Lamport does acknowledge that possibility (I know some people are researching program generation, but the results so far don't look very promising, and in any event no one expects them to ever be more than heuristic, even if they're ever effective).

[1]: I'm not sure what happens if you know the type is inhabited. I guess it will be decidable in that case (just as finding a proof of a proposition that's known to be a theorem), but the search is still intractable, and there is no way the resulting computation would be efficient.

Jweb_Guru
> Yes, but I don't think anyone considers a SQL join to be a description of a linear-time algorithm.

This is the first mention either of us have had of "linear-time" outside of a discussion about specifications. There aren't many programming languages that only let you write linear-time algorithms, and I think that's very far from most people's definition of a programming language. That being said, it's not like you can't write linear-time algorithms in Coq; its reduction machine is pretty efficient w.r.t algorithmic complexity (and it's likely to gain native machine integers and arrays soon to bring the constant factor in line with this). Proving linear-time execution for a native Coq function is another question entirely, of course, but you can always model a specific reduction machine in Coq and analyze that instead.

> Hence, specifications cannot be generally compiled, therefore they are not programming languages

I really don't agree with you on this. You have a fixed notion in your head of what it means to compile a type (attempt to find an inhabitant), but that is not how Coq interprets types at all; from its perspective, types are terms of independent interest, and are moreover themselves inhabitants of some other type. We don't expect a program to try to "find the inhabitant" of a term of type boolean when it's compiled; we want to keep reducing the term until it's in normal form (and, hopefully, our theory is consistent so it will be either true or false). In Coq, the same sort of reasoning (more or less) applies to types; it just keeps reducing them until they reach normal form. There's no special notion of "finding an inhabitant" as distinct from normal program execution, which is exactly why I claim that Coq does not make the "specification-program" distinction. I'm not sure if I can explain it any more simply. I am not saying this is not a meaningful distinction in TLA+, or that it's not a real concept, but it's not the only way of thinking about things.

> I'm not sure what happens if you know the type is inhabited. It might be decidable in that case (just as finding a proof of a proposition that's known to be a theorem), but the search is still intractable, and there is no way the resulting computation would be efficient.

Depends on the problem. In Coq, for decidable problems you can write a decision procedure that is tailored to the individual type, having it produce a proof of the proposition you desire as a byproduct (in fact, in general producing such a procedure is how you prove that the problem is decidable!). Sometimes executing it is tractable and sometimes it isn't, but I don't agree that writing an algorithm to solve such a problem automatically transforms your domain from "efficient, everyday programming" to "inefficient, specification-driven programming." Moreover, a lot of the time, there will be an easy inefficient decision procedure and a much harder efficient one, but given that they're both written in the same language and solve the same problem it seems hard for me to imagine that one would be considered "ordinary programming" and the other "program extraction" or whatever.

(In response to your [1], tactic languages like Coq's Ltac, much maligned as they are, are basically an attempt at interactive program synthesis; people rarely write complicated proof terms by hand. So I'd say these ideas have been a success, though hardly an unqualified one. Of course, one can argue that it's not "really" synthesis because you're not synthesizing the whole program from nothing but the specification, but even in ordinary program synthesis most successful solutions are based on sketching, where a partial program is provided and the synthesis engine fills in the gaps).

pron
> This is the first mention either of us have had of "linear-time" outside of a discussion about specifications.

I mentioned it before, and my point isn't about linear time in particular. If you write a specification of an algorithm of time complexity T(n), the expectation is that interpreting the algorithm would take p(T(n)), where p is some polynomial (i.e. no exponential slowdown or worse). This is what I mean by "efficient".

> but that is not how Coq interprets types at all; from its perspective, types are terms of independent interest, and are moreover themselves inhabitants of some other type.

That doesn't matter. I can specify a function by giving it a type -- say it's a function from the even naturals to their Goldbach pairs. Turning this into a program requires finding a lambda term that computes such a function.

> There's no special notion of "finding an inhabitant" as distinct from normal program execution

What I meant was that a type in Coq is not a computation (although it could be the result of a computation), and a computation is not a type (although it could produce a type -- Pi types).

> I am not saying this is not a meaningful distinction in TLA+

In TLA+ there is no such distinction because you cannot talk about a program that computes that function, but always the set of all programs that do (you can, of course, give more and more detail to restrict the set, i.e., create subtypes, but there is no "the computation"); that set is always either empty or infinite. In fact, if it's non-empty, it's always too big to even be a set (it's a proper class).

> Sometimes executing it is tractable and sometimes it isn't, but I don't agree that writing an algorithm to solve such a problem automatically transforms your domain from "efficient, everyday programming" to "ineffecient, specification-driven programming."

I am not talking about writing an algorithm to solve such a problem automatically, but about a programming language that always does so.

> So I'd say these ideas have been a success, though hardly an unqualified one.

They have been an abysmal failure when it comes to synthesizing anything that could be considered a replacement for programming. But we're talking about two radically different measures of success. I'm talking about a commercial process of turning lead into gold, and you're talking about turning a couple of lead atoms into gold atoms in a particle accelerator.

Jweb_Guru
> No, I mentioned it before, and my point isn't about linear time in particular. If you write a specification of an algorithm of time complexity T(n), the expectation is that interpreting the algorithm would take O(T(n)). This is what I mean by "efficient".

What algorithm? There are many algorithms that inhabit the same type. A specification that's detailed enough to be able to analyze its complexity is usually a program... and it's easy to write a program that runs in linear time in Coq.

> That doesn't matter. I can specify a function by giving it a type -- say it's a function from the naturals to their Goldbach pairs. Turning this into a program requires finding a lambda term that computes such a function.

In Coq, you cannot specify a function by giving its type. First, CiC is not an extensional type theory in general; it cares about how things are computed, not just what the answer is. Secondly, for many interesting function types in Coq, there are an infinite, or even uncountable, number of inhabitants (for example : Prop -> bool). Moreover, in Coq's core type theory, even if it could automatically find all lambda input-outputs pairs that inhabited some type, that would not be justification to replace the quantified type (you need to assume the function extensionality axiom if you want this).

> I didn't say there was. What I said was that a type in Coq is not a computation, and a computation is not a type. Just as a proof is not a proposition and a proposition is not a proof.

A type in Coq is no more or less a computation than a term is. I don't understand what this has to do with "specification vs programming." As for "a proposition is not a proof", I think that is the main point of significant disagreement between us. Why do you think True doesn't prove Prop, for instance? It may not be the most interesting proof in the world, but neither is the proof I of True (and in many cases, proofs of Prop are very interesting in themselves, particularly in the context of dependent elimination).

> In TLA+ there is no such distinction. You cannot talk about a program that computes that function, but always the set of all programs that do (i.e., just the type).

Okay, then neither of them have such a distinction. What you seem to be missing is that in Coq, the mechanism for manipulating types is not different from the mechanism for ordinary program evaluation; it looks like TLA+ uses an axiomatic formulation rather than computation rules, so if you were to add evaluation to TLA+ you'd probably need a new set of semantics, but that's not the case for Coq.

> I am not talking about writing an algorithm to solve such a problem automatically, but about a programming language that always does so.

As you noted, such a programming language is impossible, so I don't understand why you are talking about it. The impossibility of that hypothetical programming language doesn't mean you can't write a single unified programming language (without two different "parts") that also functions as a specification language, and I genuinely don't understand why you think it does.

> They have been an abysmal failure when it comes to synthesizing anything that could be considered a replacement for programming.

The state of this has been steadily improving! For example, https://www.cs.utexas.edu/~isil/sypet-popl17.pdf looks quite promising for everyday programming tasks.

pron
> A specification that's detailed enough to be able to analyze its complexity is usually a program...

Not necessarily, and I would claim that it usually isn't, but to see that you have to pay close attention to how algorithms and systems are specified. I give the example of the QuickSort algorithm at the beginning of this post: https://pron.github.io/posts/tlaplus_part3

> In Coq, you cannot specify a function by giving its type.

Of course you can; that's the meaning of the type -- it's just not enough to get a computation. That's precisely the point I was getting at. A specification is a description of something at some arbitrary level of detail, but languages that allow compilation -- i.e. programming language -- normally require a minimum level of detail for that to be possible. In a language like Coq, this requirement is a strict syntactic distinction between types and lambda terms.

> First, CiC is not an extensional type theory in general; it cares about how things are computed, not just what the answer is.

Precisely. This is absolutely necessary for a programming language, and undesired in a specification language. (BTW, in TLA, what and how is arbitrary, because the temporal logic lifts everything that in Coq is operational into the denotation)

> A type in Coq is no more or less a computation than a term is.

A type is not a computation. To "run" something, or to speak of the operation of something, that something must be a lambda term.

> the mechanism for manipulating types is not different from the mechanism for ordinary program evaluation

I know that (I don't know Coq, but I know a bit of Lean, which is similar, I think), but that doesn't matter. You can compute with types, but there is a strict distinction between the common form of nondeterministic specification -- types -- and actual computation (not types).

> As you noted, such a programming language is impossible, so I don't understand why you are talking about it.

Because we're trying to work out the difference between a specification language and a programming language. A programming language is one that, when it describes a computation, can always be interpreted efficiently in the sense I meant. A specification language should allow specifying computations at arbitrary levels of detail. I claim that a specification language cannot be a programming language and vice versa -- except if we had this magical language I described. What you can do, and Coq does, is combine two languages in one, but even then the result requires a delineation between a computation and a specification (arbitrary nondet).

> For example, https://www.cs.utexas.edu/~isil/sypet-popl17.pdf looks quite promising for everyday programming tasks.

I honestly do like the research, but this is going from converting two atoms of lead into gold in a particle accelerator to five, or like saying that Olympic high jumpers are steadily improving at jumping all the way to the moon... This is another discussion: I like the research and I like the practice, but I don't buy the claims that such improvements in research put changes to practice on the horizon. The gap is still huge, and I also think some of the research (especially in programming language theory) is done based on assumptions that completely fail in practice (I wrote about that here: https://pron.github.io/posts/people-dont-write-programs)

cttet
I haven't had the chance to watch the video, but I strongly disagree with the title.

Math as a language is just a DSL that carries a lot of historical burden and is not friendly at all. Programming languages can achieve the same level of abstraction with more clarity. Math instead added too much special-case syntactic sugar (bad is multiply(b, a, d); log a is log(10, a) or log(e, a)), operator overloading (/ for division and dy/dx for the differentiation operator; the same notation for scalar and matrix multiplication, one of which is commutative while the other is not), and complicated designs (an integral would be much clearer written as integrate(target=ax, over=x, from=0, to=10) instead of the original notation).
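
Notation along the lines the comment asks for already exists in computer algebra and the standard library; a small sketch (assuming SymPy is installed) where the log base and the integration variable and bounds are all explicit arguments:

    import math
    from sympy import symbols, integrate

    # Explicit base: no ambiguity between ln, log10 and log2.
    print(math.log(100, 10))             # 2.0

    # Explicit variable and bounds, roughly integrate(target, over, from, to).
    a, x = symbols("a x")
    print(integrate(a * x, (x, 0, 10)))  # 50*a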

wodenokoto
The talk is about TLA+, which is somehow not a programming language, yet you follow a syntax and describe algorithms and then I'm not sure what happens.

Maybe you prove your algorithms and call it a day, or maybe you can ask your program to run it, like an external dependency.

It's very unclear how TLA+ is not programming, or how it fits in with your programming. But he is not asking you to write normal math notation.

pron
> Maybe you prove your algorithms and call it a day

You can do that, or check it with a model checker (which is much easier). Either way, you verify your algorithm rather than "run" it (although the model checker lets you simulate or "run" a random execution).

dchichkov
I thought a more progressive approach is phasing out incomplete and obscure single-letter Greek notation, as done by OpenAI, by "Structure and Interpretation of Classical Mechanics", or in "Functional Differential Geometry".
untangle
A Programming Language (APL) started life as a mathematical notation. As such, it is the closest thing that I have encountered for expressing algorithms as code. It is the embodiment of "freeing the hidden algorithm." Fifty examples of such are given in the following link:

http://www.jsoftware.com/papers/50/

ryanmarsh
With all due respect to Leslie Lamport his "what could be more beautiful" explanation of Euclid's algorithm is no easier for me to make sense of than these[0] various implementations.

For a "complex" problem[1] you cannot write a unified field definition[2] before the fact. Otherwise the problem would have to be merely complicated (most sufficiently useful systems are complex). The easiest way humans can make sense of a complex problem is to define a set of misfits[2], or tests. In practice efforts to find a better way of expressing the hidden algorithm of any given software system have failed. Yet "tests" have proven themselves as a sustainable way for humans to iteratively arrive at the most suitable solution to a complex problem and to guide understanding after the fact.

tl;dr: a more elegant programming language is nice, but it does not solve the fundamental problem of human minds dealing with complex problem domains

0: http://www.codecodex.com/wiki/Euclidean_algorithm

1: Cynefin, Dave Snowden

2: Notes on the Synthesis of Form, Christopher Alexander

pron
I can't say I fully understand what you're saying (you use some uncommon terminology without defining it), but expressing algorithms mathematically does not impose any style or order on the exploration of the problem. It just gives you a language that is far more flexible for talking about both your problem and your solution at any level of detail that you choose than any programming language.

TLA+ certainly allows you to describe a system as a set of scenarios/properties (what I assume you would call tests), and also of asserting that a design satisfies them.

There was a system I worked on that was so complex that I was only able to design it using TLA+; engineers at Amazon, Microsoft, Oracle and Intel report similar experiences.

gav
I have a feeling if more people read Alexander's books on design and patterns the quality of systems as a whole would increase. He is largely unknown in the tech circles I've been in.
microcolonel
Spreadsheets are computer programs, and |things that are reasonable to represent as a spreadsheet| is the wrong subset of computer programs to solve your problem.
dzdt
Lamport is famous in part for his work on LaTeX, the document typesetting language. It is Turing complete. Should no one write papers in LaTeX?
tincholio
I'd say that he's known as one of the fathers of distributed systems, and for winning a Turing award, more than for writing LaTeX. Also, it's not LaTeX that's Turing complete, but rather TeX. LaTeX is basically a bunch of TeX macros.

Also, the rest of your comment doesn't make any sense, either.

pessimizer
It's not a confusing comment. Lamport says that programming languages should only be used for programs. Lamport is also famous for using a programming language for typesetting.

A comment about the things you think he's more famous for doesn't make sense. A comment about LaTeX not being turing-complete, and citing as proof that it's largely a bunch of macros written in a turing-complete language (glued together with that turing-complete language) doesn't make sense either.

DSingularity
When you are writing a paper that requires LaTeX, it's because it requires programming to properly set things up. Producing a typeset paper with LaTeX is part writing and part programming.
Yoric
Well, in my experience, academics often write with LaTeX because that's what they know how to use. I've had to actually code in (La)TeX for some papers or slides, but that was pretty unusual. Most of the time, I was using it out of inertia, and because I trusted it more than Microsoft Word (which used to be very bad at the time; I imagine it has grown better).
romwell
>Well, in my experience, academics often write with LaTeX because that's what they know how to use

It's all they know how to use because it's the only thing that actually works to produce results they need with minimal pain.

And you might not need to program in LaTeX, but everyone has their custom macros.

Word really hasn't changed that much.

emodendroket
Word has a lot of great features but is also easy to get going and use the completely wrong way -- great to get people to buy it but likely to lead to some bad experiences trying to edit documents.
whatshisface
I find LaTeX to be a lot better at typesetting than Word, even today. It is much better at deciding how much space should go between words and the equations are as refined as it gets.
aidenn0
I'm not sure how your comment pertains to the contents of the video, but considering it's been over 30 years since he started work on LaTeX, it wouldn't be that odd even if he were to advocate against using it.
fastball
As a side note: this presentation has a lot of wasted space in the video, and no offense to the speaker but I'd rather have a better view of his slides than his face. I can't even read some of the algorithms put on the screen!
fastball
Downvotes? You guys prefer to not be able to see the slides and to instead have a good portion of the video's space taken up by the speaker themselves? Am I alone in this?
yoz-y
I agree, the slides and speaker views should have been flipped.
zackmorris
I wasn't exposed to spreadsheets until a few years into college back around 1996 or 1997 maybe (I had been programming in C/C++ for 7 or 8 years by then). I wasn't taught matrix math until pretty late in the curriculum, I want to say junior or senior year. Also I was lucky to have a semester of Scheme but they were transitioning to teaching Java around the time I graduated (I don't know if they ever switched back). And this was for a computer engineering degree at one of the best state universities for engineering in the US.

Honestly I think it might be time to phase out teaching imperative and object-oriented programming. Most of the grief in my career has come from them. I don't care if they're where the jobs are. The mental cost of tracing through highly imperative code, especially in the newer implicit style of languages and frameworks like Ruby and AngularJS (where logic flows have no obvious connection to one another, or transition through async handlers connected by conventions that aren't immediately obvious to the developer), is so high that the underlying business logic is effectively obfuscated.

I think we should get back to fundamentals. Maybe start with the shell and explain why separate address spaces connected by pipes are such a powerful abstraction, maybe touch on the Actor model, maybe show how spreadsheets and functional programming are equivalent, and how even written notation is superfluous. Really focus on declarative programming and code as media, and how that made the web approachable by nontechnical folks before it was turned into single page application spaghetti. There are so many examples from history of where better ways were found before the mainstream fell back into bad habits.

quickthrower2
I disagree that they should phase out imperative/OO programming, but in my experience there should be more emphasis on different languages, i.e. throw in an ML (as in Meta Language) and a Lisp, and most importantly some information about the trade-offs.

However, it would require a culture shift towards the university being a place to get you ready for a career in industry rather than academia.

Because you are prepping for academia, it is OK for them to postpone functional programming to later years or even the PhD.

You mention grief with OO, but I think you would get a lot of it with functional too; it has more to do with the incentives and people structures in organisations that produce code. Usually praise goes to those who get an 8-hour Jira ticket done in 4, or a 5-week task done in 4 weeks, etc., and from the users' point of view 'it works'. The structure of the code is not discovered until later.

Code reviewers are working at the deckchairs level; they are unable to change the Titanic's direction, and again both parties in a code review have an incentive to do it as quickly as possible (while not looking like they obviously brushed over it), so it becomes mostly a bug-hunting and syntactic-cleanup exercise.

It's almost a running joke that refactoring rarely gets done, and if it does it is trivial, and it usually requires some stealth from a developer or manager to create cover to do the refactor.

The only hope we have is to work somewhere where there is a good understanding of code quality from top to bottom in the org, or at least when you cross the technical/non-technical boundary there is a high degree of trust to let the tech people do the right thing, and not KPI them into submission.

mcphage
> from my experience more emphasis on different languages i.e. throw in a ML (as in Meta Language) and a Lisp, and most importantly some information about the trade offs

Most CS curricula have a course where you spend time programming in a variety of different programming languages, in different paradigms.

badpun
> Code reviewers are working at the deckchairs level, they are unable to change the Titanic's direction, and again both parties in a code review have incentive to do it as quick as possible (while not looking like they obviously brushed over it), so it become mostly a potential bug hunting and syntactic cleanup exercise.

And that's why I despise code reviews. There's obviously not enough time allocated to them for in-depth understanding of the reviewed code, so I'm mostly spending precious resources (attention, energy) on doing a half-assed review that is not going to do that much good. It feels like such futile work.

zackmorris
You summed it up nicely. After writing my comment, I realized that imperative programming is so much more difficult than functional programming that in a way, it's the majority of the work of programming. Anyone can learn to use a spreadsheet, but it takes years of dedication to master debugging enterprise software. So there will always be huge demand for that skill, and so colleges should probably continue building it in students.

I'm still in mourning though imagining how far programmers could go if they weren't stuck endlessly debugging imperative code that will never be deterministic or free of side effects. Lots of lost potential there. I'm coming up on 3 decades of experience doing that and it feels like well over 90% of the code I've written was a waste of time. I guess it paid the bills though.

mcphage
> I realized that imperative programming is so much more difficult than functional programming that in a way, it's the majority of the work of programming. Anyone can learn to use a spreadsheet, but it takes years of dedication to master debugging enterprise software.

Anyone can learn to program spreadsheets, because spreadsheets realize that state is the most important thing and put it front and center, hiding the calculations. Most programming paradigms are about manipulating the calculations, and the state is only visible when the program is running. As long as programming tries to avoid state, it'll be hard for most people to learn.

seventhtiger
What you just said really struck a chord with me. I do tend to ignore state because it's less accessible. The result is that my code gets more esoteric and abstract while the changes I need to make to state come very slowly. On the other hand I've made spreadsheets as complicated as small applications I've made, but the cognitive load feels significantly lighter.

How can we bring state forward during development?

romaniv
>Honestly I think it might be time to phase out teaching imperative and object-oriented programming.

I have seen plenty of universities teach Java and C++, but I haven't seen any that teach actual OOP. James Coplien aptly calls the current paradigm "class oriented programming".

ThePadawan
This thought is why for a long time, Programming 101 at my alma mater (ETH Zurich) was taught in Eiffel. Obscure, but strongly opinionated towards OOP by way of strongly enforcing the paradigm.

Stop reading here.

(Since Bertrand Meyer, driving force behind the language and course, retired, the course is now taught using Java.)

outworlder
It triggers me to no end when I watch an introductory course, for people with no previous exposure to any programming language, and the teacher starts with

"public static void main()"

In order to understand it, you need to have a good grasp of classes, static methods, access controls.

This is usually followed up by a request to ignore the entire line, which is one of the worst habits you can have as a developer.

Then you have to compile this. It used to be the case that people would copy and paste command line excerpts (bad habit!), but now they will get a pre-configured "IDE" where they can punch the compile button (ok, but I have met plenty of professionals who never left this stage).

Instead, they could avoid the whole thing by using something else, say, Python, or even Javascript. In both cases you can quickly drop to a repl and start trying stuff out.

Once the basics are there, then you can proceed to object orientation. And perhaps even to Java as a more advanced course; only now you have the proper concepts to understand the very first line you are supposed to write.

Why would universities teach Java and C++ anyway? They are either 'boring' from a computer science perspective, or too convoluted for teaching concepts.

int_19h
The thing I love about Python as a teaching language is that you don't need to tell people anything about objects to get them started, and even to do some more advanced stuff (e.g. turtle graphics). And then you can introduce objects, and they light up the mental image of the world that is already there - because objects were part of the ride all along, just staying in the background.
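
A minimal sketch of that idea, using Python's standard turtle module (the drawing itself is made up for illustration): the first half never mentions objects; the second half does the same thing with the object made explicit.

    import turtle

    # Procedural style: module-level functions drive a default turtle
    # that stays in the background.
    turtle.forward(100)
    turtle.left(90)
    turtle.forward(100)

    # The same drawing, with the object that was there all along made explicit.
    t = turtle.Turtle()
    t.forward(100)
    t.left(90)
    t.forward(100)

    turtle.done()
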
scarface74
> Why would universities teach Java and C++ anyway? They are either 'boring' from a computer science perspective, or too convoluted for teaching concepts.

Not a fan of Java by any means, but they should teach Java so graduating students can put that on their resume and get a job so they won't live with their parents until they are 35....

_emacsomancer_
This is presumably why they do teach Java.
userbinator
This doesn't really apply to C++, but I know that Sun was pushing very hard for Java in the early 90s, to the point of basically giving away hardware and course material to universities who would teach Java.
orangeshark
> This is usually followed up by a request to ignore the entire line, which is one of the worst habits you can have as a developer.

That really bugged me when I learned Java at University. I was also unaware of how the compiler built the code and ran it; it was all hidden by the IDE. I felt a bit satisfied by my data structures course, which had a pretty decent book that explained a bit of this in the first chapter. I then decided to just switch to using vim and Make to build and run my school assignments.
chriswarbo
Anecdotally, I tried to teach myself programming in the summer before starting University. I went with Python since it's available by default on Linux, and managed to get a little CLI board game with a main loop that asked the user for their move, updated the game's state and printed the new board.

One of the first CS courses at University was OOP in Java. I really struggled to grasp OOP, for most of the first year. It didn't "click" until I tried doing it in Python; after that I went down the rabbit hole into meta-object protocols, Smalltalk/Self/Newspeak, etc.

Java seems to occupy the opposite of a sweet spot: it makes learning difficult for newcomers, yet it's very limited and restrictive for those with experience. Not only is it overly verbose and ceremonious compared to untyped languages like Python, but also compared to ML/Haskell, whilst being less expressive and less safe!

wruza
Why not just explain it in less detail? Public so that Java can see it. Static so it just lives here, no need to create objects. And void means that no value is returned; we will just print our results. Java calls it as the program runs, thus main(). Btw, an object is something you can create and call methods on – functions of that object.

Static may be hard to understand, but it is something you can just do. You can say(hi) and that requires nothing, but you cannot drive() without a car.

gmueckl
You can't teach that without touching all the concepts you mention. These would make up almost all of the later parts of the course.
marcosdumay
> This is usually followed up by a request to ignore the entire line, which is one of the worst habits you can have as a developer.

It is usually a request to ignore it for now, because "we will get into each part later". I fail to understand what is so bad about it. It's just not practical to provide all the theoretical foundations upfront, and postpone practice to the second semester.

To this day, I still think that the easiest way for me to learn a new language is to copy some hello world and replace one line after another once I get why they are there.

Terr_
I think of it a bit like how physics starts with that frictionless vacuum, or leaves the relativistic stuff for later.
learc83
I was never very good at that--ignoring something until later when it's staring right at you.

I prefer teaching programming with a language that minimizes the boilerplate required to get started.

curun1r
To me, my intro to comp sci class nailed it by teaching Scheme. The first 30 min or so were spent introducing all the syntax we'd learn in the entire class. The remaining several weeks were spent on CS concepts and learning to express ourselves to the computer. I've never used Scheme/Lisp professionally, nor do I have an urge to. But it was the perfect teaching language because it puts so little between the student and learning to program a computer.
munchbunny
> Honestly I think it might be time to phase out teaching imperative and object-oriented programming. Most of the grief in my career has come from them. I don't care if they're where the jobs are. The mental cost of tracing through highly-imperative code, especially the new implicit style of languages and frameworks like Ruby and AngularJS (which have logic flows having no obvious connection to one another, or transition through async handlers connected by a convention which isn't immediately obvious to the developer) is so high that the underlying business logic is effectively obfuscated.

I don't think this is a fault of functional programming as much as it is a fault of implicit language constructs. There are common conventions where being able to chuck functions and closures around is actually useful (e.g. sorting). These language constructs/conventions are useful because they can be extremely expressive, but unfamiliar implicit behaviors cost me much more "working memory" to understand when I'm reading the code.

There's a version of object-oriented programming that mostly sticks to declarative behaviors and uses inheritance, unions, mix-ins, etc. to actually hide behavioral abstractions. The problem is that it often takes 5-10 years of experience before the programmer (myself included) begins to appreciate how important future readability of the code is, and to understand that the more exotic language features should be used sparingly.

laythea
"...have logic flows having no obvious connection to one another, or transition through async handlers connected by a convention which isn't immediately obvious to the developer"

This is very true. I hate parsing code written like this, or code written using a mashup of concepts, requiring mind gymnastics to fathom an implementation of a solution to a relatively simple problem. It makes the job so much more unnecessarily difficult.

Points for simplicity if you ask me.

seanmcdirmid
Inverted control is a hallmark of many programming domains. It’s not like you can get the user to be synchronous with respect to the program they are using (unless it is a non-interactive batch tool). Even functional paradigms like Rx have control flow running around all over the place; the only saving grace is if you can keep it encapsulated via a set of opaque data flow pipes (and often you can’t).
macspoofing
>I think it might be time to phase out teaching imperative and object-oriented programming. Most of the grief in my career has come from them.

Good luck. Computer Science degrees are a balancing act between the practical and the theoretical. More importantly, students want to work with technologies that are popular in the industry today. You can try to stand against the tide, but you will lose.

Joe-Z
Would you mind elaborating on "why separate address spaces connected by pipes are such a powerful abstraction", or point me to some sources? Likewise if there's something you can recommend for reading up on the Actor model?
theoh
The phrase probably refers to Unix pipes: https://wiki.tuhs.org/doku.php?id=features:pipes
zackmorris
Ya, in my mind, executables that do one thing well and are connected by pipes are one of the most proven models for getting things done. Even relatively nontechnical people can write shell scripts, batch files and macros that take some kind of data from a socket/file/pipe, pass it through a bunch of black boxes and spit out an answer. This is very similar to the Actor model and is much simpler to reason about than the shared-state threads of languages like Java. Optimizations like copy-on-write can eliminate most of the overhead of sending data between processes.

Here's a quick overview of how the Actor model is better than shared state (mainly by avoiding locks and nondeterminism): https://doc.akka.io/docs/akka/2.5.3/scala/guide/actors-intro...

And just as an aside, I haven't fully learned Rust yet but from what I understand, the borrow checker only applies to mutable data. If you write a fully immutable Rust program then you can avoid it altogether. So to me, this is a potential bridge between functional and imperative programming. It also might apply to monads, since (from what I understand) languages like ClojureScript can run in one-shot mode where all side effect-free code runs completely and then the Javascript runtime suspends execution, only starting it again once new data is ready. This might also be a bridge between functional and imperative code, because I've found monads to be one of the weakest links in functional programming and I've never quite wrapped my head around how they work or if they're even a good abstraction. Maybe someone can elaborate on them!

In short, the goal here is to reduce all concurrent code to a single thread of execution that is statically analyzable as much as possible. So a typical C++ desktop app may not see much benefit from this, but anyone who has found themselves in Javascript callback hell can certainly appreciate the advantages that async/await provides, since it more closely approximates the message passing of Erlang/Go, which is more similar to shell scripting. If we had a "perfect" language that optimized all code to be concurrent as much as possible, for example by converting for-loops into higher order functions like map and then running them on separate threads behind the scenes, then we could offload the wasted work currently done by humans onto the runtime. So that would mean that instead of writing programs that, say, access remote APIs, cache the data somehow, remember to invalidate it when the source of truth changes, etc etc etc, that complexity instead could be reduced to a dependency graph that works like a spreadsheet and updates all dependencies when something changes. This seems to be where the world is going with reactive programming, with a lot of handwaving and ugly syntax because it's reimplementing old concepts from Lisp etc: https://en.wikipedia.org/wiki/Reactive_programming

Sorry this got a little long, I could ramble about the downsides of programming in these times for hours.
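
A minimal sketch of that spreadsheet-style dependency graph in Python (the Cell class and the cell names are invented for illustration, not taken from any particular reactive library):

    class Cell:
        """A value that knows which other cells depend on it."""
        def __init__(self, value=None, formula=None, inputs=()):
            self.formula = formula
            self.inputs = list(inputs)
            self.dependents = []
            for cell in self.inputs:
                cell.dependents.append(self)
            self.value = value if formula is None else formula(*[c.value for c in self.inputs])

        def set(self, value):
            # Like editing a spreadsheet cell: update, then push the change downstream.
            self.value = value
            for dep in self.dependents:
                dep.recompute()

        def recompute(self):
            self.value = self.formula(*[c.value for c in self.inputs])
            for dep in self.dependents:
                dep.recompute()

    a = Cell(2)
    b = Cell(3)
    total = Cell(formula=lambda x, y: x + y, inputs=[a, b])
    a.set(10)
    print(total.value)  # 13: dependents update automatically, like a spreadsheet
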

Joe-Z
Thank you for the follow-up and the links! I wonder if your suggestion would hold up if it were put to the test and one were to implement a big, complex piece of software strictly with the independent processes+pipes model. I could well see that you would wind up in some other sort of hell, because the problem of understanding lies not in the way it's implemented but in the complexity of the tasks involved.

>Sorry this got a little long, I could ramble about the downsides of programming in these times for hours.

Haha, programmer talk is the best!

steveklabnik
> I haven't fully learned Rust yet but from what I understand, the borrow checker only applies to mutable data

This is not correct. The borrow checker applies to references, both mutable and immutable.

drchickensalad
I think the intention was that if you have no mut refs, then you have no borrow checker problems.
steveklabnik
That’s not true. You still have things like dangling pointers being checked for.
zackmorris
Ok thanks for the followup, it was something I heard but hadn't tried in practice.
steveklabnik
No problem!
aidenn0
CSP[1] is also related.

For a quick breakdown of why CSP and the Actor model are both Good Ideas:

Think about any concurrent system (i.e. multiple threads of execution). Any modification of a shared resource by one thread (one concrete example is a global variable, but filesystem, devices &c. all apply too) is implicit communication that can happen at any point whatsoever in another thread's execution.

The state of one thread being modified in unsynchronized ways by another inevitably leads to bugs that are very hard to reason about.

Both CSP and Actors involve removing the implicit communication and replacing it with explicit communication.

Two Unix processes connected by pipes are one example of this, since two processes do not typically share any writable memory.

I don't know if you've written any code designed to be used over a pipe. If you have, you may notice that you do not care at all when the process on the other side of a pipe writes to variables. You do not need locks or mutexes or semaphores or any of those constructs for IPC in this situation.

I don't know if you've ever written any multithreaded code with shared variables, but if you have, you've definitely noticed that you need to carefully hand-synchronize modifications to those variables.

Now of course, the only special thing about the unix processes example was the lack of mutable shared state, and the use of an explicit communication channel. You could write a single-process with many threads that communicate over such channels to avoid the IPC overhead of unix, but still get the simple-to-reason-about concurrency of processes.

Pretty much all of the ideas above were published before 1980.

My dad read Hoare's CSP paper when getting his Masters degree and he told me "80% of the class didn't understand it, and the remaining 20% of the class thought that manual synchronization was not a problem" which explains much of the buggy, highly concurrent software written since.

1: https://en.wikipedia.org/wiki/Communicating_sequential_proce...
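
A rough sketch of that "explicit channel instead of shared variables" idea, using only Python's standard library (queue.Queue standing in for a CSP-style channel; the producer/consumer names are made up for illustration):

    import threading
    import queue

    def producer(out_ch):
        # The producer's local state is never touched by anyone else.
        for i in range(5):
            out_ch.put(i * i)
        out_ch.put(None)  # sentinel: no more messages

    def consumer(in_ch):
        while True:
            msg = in_ch.get()   # the only point of contact between the threads
            if msg is None:
                break
            print("got", msg)

    ch = queue.Queue()
    threading.Thread(target=producer, args=(ch,)).start()
    consumer(ch)
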

Joe-Z
Thank you for the insight. What you're describing reminds me pretty strongly of pure functions. Which I guess makes sense, since "Functional languages can do concurrency out of the box!!" (well yeah, but you need to learn how to write truly functional code first :D)

>"80% of the class didn't understand it, and the remaining 20% of the class thought that manual synchronization was not a problem"

Unfortunately it seems to be this way with many things in our field (/life?). And the 80% non-understanders group doesn't even stay the same people for every topic! :)

aidenn0
Pure functional programs work because what kills you isn't shared state but mutable shared state, and if you have no mutation, then you have no mutable shared state.

Unix processes work on the opposite side: you can mutate everything, but very little is shared.

With threading and impure code you need to not mutate shared things, and limiting yourself to message-passing is a good way to achieve this (of course you can't message-pass shared pointers or you've missed the point).

ezoe
Real hardware is imperative. Object-oriented programming was once treated the way Functional programming is now. The languages which did implement OOP as it was originally conceived were horribly inefficient. So practical languages took the good idea out of OOP for convenient syntax sugars.

Functional Programming will follow the same path. You probably don't like it, for it will end up impure and just some syntax sugar that looks like FP, but the rest of the world likes it because it gets the job done.

vmchale
> Real hardware is imperative.

Wrong.

> So practical languages took the good idea out of OOP for convenient syntax sugars.

Also wrong, unless by "syntactic sugars" you mean "nontrivial compiler rewrites"

> Functional Programming will follow the same path.

Doubt it.

> You probably don't like it, for it will end up impure

Purity does not constrain efficiency at all.

learc83
>Wrong.

At an abstraction level below digital circuits, you are correct. But thankfully the layers on top of that almost always do a very good job of hiding it.

>Doubt it.

Functional programming is already following the same path. Sure, there will be non-hybrid languages around if you want to use them, but take a look at the functional constructs that have made their way into almost every procedural language in use today.

Most people have never touched Haskell, but almost everyone has seen a map function.

>Purity does not constrain efficiency at all.

Maybe not inherently, but there's a reason we have to break out and use mutable data structures when we really need performance. For the systems we are working on today purity does impact performance/efficiency.

Koshkin
> time to phase out teaching imperative

Knuth would probably disagree.

vmchale
> show how spreadsheets and functional programming are equivalent

They are not in fact equivalent, unless the functional programming you're doing is trivial.

mattnewport
Could you elaborate on how you came to be programming in C/C++ for 7 or 8 years before encountering a spreadsheet? That seems an unusual path even back in the 90s.
zackmorris
Ya I picked up my first programming book when I was 12, then started with HyperCard and Visual Interactive Programming (VIP) on the Mac Plus with my friend (another guy named Zack):

https://en.wikipedia.org/wiki/HyperCard

http://mstay.com/images/screens/vpc1.gif

So I thought of code as a big flowchart where you fill in the boxes with business logic. I transitioned to C within a year or two and got incredibly deep into assembly language and low-level code for blitters back in the 486 era when DOOM came out. Unfortunately, moderately priced Macs were an order of magnitude too slow to play fullscreen scrolling games at 640x480x8 resolution until the 60 MHz PowerPC came out in the mid 90s. So I never made any real money on Mac shareware games, but I digress.

The Mac had an informal programming environment (no console or office suite with standardized inter-application communication) and I hadn't seen MS Access or FileMaker yet, or spreadsheets. When I first encountered them for writing reports in college and it finally clicked for me that code didn't have to be run imperatively, it was devastating in a way. But I made a full recovery in the early 2000s when PHP/MySQL got popular and I was able to get back into rapid application development with ORMs. I feel like the world is about ready to make the jump from hands-on tools like Laravel to WYSIWYG tools like Wix. Most of the attempts I've tried are frankly terrible, but I'm optimistic.

scroot
If you compare the ease of "authoring" in the top hypermedia environment of its day (Hypercard) with doing the same today (the web) it's plain as day that things have regressed.

Here is a straight up comparison someone made if anyone is interested:

https://twitter.com/ecgade/status/1029795513514774529

Retric
Computers are still imperative, so all functional code is arguably syntactic sugar over that core, causing a lot of leaky abstractions to show up all over the place.

I think the problem with object-oriented programming is that it's taught too soon. Start with imperative, then functional, then toss object-oriented into your senior year.

cicero
I have heard critiques of functional programming, but not that there is a problem with leaky abstractions. Can you provide an example? Does it invalidate the discipline of trying to use pure functions when possible?

I learned to program using BASIC on an Apple II. All of the variables were global, and it wasn't until I got Apple Pascal that I had access to a language that had local variables. I immediately saw the advantage of the discipline of using local variables whenever possible because I had experienced the difficulty of tracking down where in the whole program a global variable might be changed.

It could be argued that local variables are just syntactic sugar over a global memory space, but no one credibly argues today that we should therefore only use global variables. Local variables make it easier to reason about the behavior of a program, and it appears to me that functional programming takes that idea even further.
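
A tiny sketch of that contrast (not from the comment above; the function names are made up):

    total = 0              # global: any line anywhere in the program may change it

    def add_global(x):
        global total
        total += x         # to know what `total` is, you must know every caller's history

    def add_local(current, x):
        return current + x  # everything this function can see or change is in its signature

    add_global(5)
    print(total)              # 5
    print(add_local(10, 5))   # 15, no hidden state involved
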

aaaaaaaaaab
Purely functional languages come with the promise that a sufficiently advanced compiler will see through all that monadic functional cruft and run your code as well or even better than it would if written in an imperative language.

As we don’t yet have such an advanced compiler, impure hacks like `par` start seeping through the cracks.

kccqzy
What do you mean impure hacks like `par`? It's a parallelism primitive. You mean `seq`? In either case it's not impure. It's just a primitive that can't be expressed in the language itself. It's still referentially transparent (which is what people mean by pure).
Retric
Program execution time and memory footprint are great examples. Math does not care about these details, but just because two programs are logically equivalent does not mean they will behave identically.

I am very pro functional programming, but your mental model needs to be accurate to really dig into the details. As such you really need to understand ASM and procedural code or it will eventually bite you.

panic
I don't like how people equate the algebraic FP style of programming with "math" -- there are plenty of ways to model execution time and memory footprint using math, for example. A functional program is no more "mathematical" than an imperative one.
Retric
I mean as a mental model.

You can for example analyze a program assuming an ideal compiler (which I have seen people do), or you can understand the actual compiler used. The second is an implementation detail subject to change, but it’s also more accurate.

I recall one meeting when people said doing comparisons with a list of objects one comparison at a time should be as fast as doing the same thing one object at a time. I pointed out that due to cache issues, doing each check an object at a time would be faster. A few people were shocked when the test showed a significant improvement.

ken
That's a curious place to draw the line. As we've seen in the past year, assembly isn't good enough. You need to understand your microcode or it will eventually bite you ... and you don't.

But that's all just nit-picking <<1% cases. When's the last time you saw a functional program whose execution time or memory footprint were unacceptable, and required knowledge of assembly language (or lower) to resolve? I can't say I ever have.

MaxBarraclough
Perhaps I'm missing the point here, but my understanding is that functional programmers (and especially Haskell programmers) tend to have a different problem: knowing what code the compiler can optimise well.

From what I've heard about tuning Haskell code for performance, much of it depends on the particulars of the compiler, rather than on the underlying CPU.

Retric
The best example I can give is not quite functional code. I have seen a lot of horrific SQL because someone did not really understand that joins, for example, have a cost based on some understandable criteria. You need to have a basic mental model of what the database does to write good SQL.

I don't expect everyone to be proficient in ASM, let alone microcode, but I do expect them to at least understand that they exist. Beyond that, I expect any decent CS program to enable someone to build a useful mental model of what's going on. Plenty of subject matter experts write very valuable code without that knowledge, but they tend to hit a very real wall.

Koshkin
> Computers are still imperative

"Computer Science is no more about computers than astronomy is about telescopes." — (Mis)attributed to Edsger Dijkstra, 1970.

AllegedAlec
That explains why astronomers are so incredibly enthusiastic about telescopes, and spend immense amounts of time trying to create better ones to overcome the limits of what they can do now.
Koshkin
Sure, everyone is enthusiastic about their tools. Still, it is not telescopes that are the subject of the science of astronomy.
AllegedAlec
It's not the subject, no, but the limitations on telescopes are what places limits on how much astronomy can do, and by improving their telescopes, and by improving how they use their telescopes, they can find out more than what they could before.
renox
Note that the detection of gravitational waves required modelling those waves first. So astronomy is needed as an input to the telescope, and it also progresses as an output of it.
dmpk2k
Meanwhile, in the real world...
klodolph
I'm still convinced that (Mis)Quoting Dijkstra Considered Harmful. A Dijkstra quote derails a conversation about CS almost as surely as anything else.
slavik81
Developing computer programs is not computer science, either. The application of science to solve practical problems is "engineering".

Unfortunately, real programs run on real computers and engineers need to deal with that. In theory, theory and practice are the same. In practice, they are not.

LukeShu
> As a result, primarily in the U.S., the topic became prematurely known as "computer science"---which actually is like referring to surgery as "knife science"---and it was firmly implanted in people's minds that computing science is about machines and their peripheral equipment.

Edsger W Dijkstra. Mathematicians and computing scientists: The cultural gap. Abacus, 4(4): 26–31, June 1987. ISSN 0724-6722. URL http://dl.acm.org/citation.cfm?id=25596.25598.

Which is similar in spirit to the contested quote, and predates by a few years any of the references on Wikiquote.

kamaal
In the SICP lectures, they explain why names like 'Computer Science' come into being.

They explain it by way of the origins of the word 'Geometry', which translates to 'Earth Measurement'. What they deduce is that when a field is young, it's hard to make a distinction between the science behind the field and the instruments/tools you spend time with to make the science happen. That is because you spend so much time with the tools that a big part of the work is tool expertise itself.

Until we arrive at perfect computers and perfect languages, we are going to be stuck with this sort of phenomenon for a very long time.

jpfr
We had the perfect term early on: informatics

It is also the official term for computer science in many cultures. Mostly in European countries/languages.

StreamBright
And what about humans? You are rarely writing code for computers, but for humans. Many companies optimize code for readability, and there is a reason: usually multiple people are working on the same code at the same time, or over time at least. It is the compiler's job to translate the code into the computer's language, which can be imperative; I would still prefer the human side to be functional and declarative. Maybe it is just me.

I have struggled with the concepts of Java for a long time, especially with the many keywords that I cannot relate to (private, final, static, void), while I understood the majority of Clojure at first glance. I could not write Java without an IDE, but I'm more than happy to develop Clojure with VI. It appears to me that we overcomplicate things for no good reason and underestimate the power of simplicity.

jtms
To be fair, almost no one can (or should) write java without an IDE. IntelliJ + ideaVIM has been a great compromise for me as a long long time VIM user. I know I am not responding to your main point (of which I am sympathetic) but I figured I'd throw it out there anyway
Retric
Education is there to teach you. I don’t think everyone will use SQL or ASM for example, but IMO every CS program needs to have at least one class covering each of them in depth for at least a few weeks.

Procedural code is even more central, and should be covered in depth.

ChristianGeek
“I don’t understand it, therefore it is inferior.”

I currently program for a living with Java. It clicked with me the moment I first learned it. (It was also my first introduction to OOP.) Right now I’m struggling to learn the nuances of functional programming with both Scala and Haskell. Do I consider functional programming, Scala, or Haskell inferior because they’re not as straightforward for me as Java (or C#, or C, or even ASM) was? Of course not.

vmchale
> so all functional code is arguably syntactic sugar

No. "Syntatic sugar" implies that it is relatively trivial 1-1 translation that could in principle go backwards. Functional programming has many, many more optimizations and transformations.

adw
A ton of problems in software engineering, I am convinced (and isn't wild generalization one of the marks of our field! I at least want to own my own hypocrisy here) are communication problems. Nearly all the interesting ones, anyway.

And one of them is axioms no-one ever communicates. I've worked with programmers with at least three markedly different axiomatic bases, for want of some less pretentious – and less exaggerated-for-rhetorical-effect – way of putting this:

* `Electrical engineers`. "Computers are imperative because computers are just circuits". Functions, objects, mathematical abstractions in general are fundamentally leaky and to be regarded with deep suspicion. Paradigmatic languages: C, asm.

* `Pure mathematicians`. Computers are abstract machines for manipulating symbols. Programming is set theory; hardware is an implementation detail. State is just an artifact of insufficient precognition. Paradigmatic languages: ML, Haskell.

* `Approximators`. These folk rarely come from computer science backgrounds; instead they tend to come from "technical computing", meaning sciences, engineering, economics, statistics and the like. They're also on the rise, because this is the group of people who "get" machine learning. Computers are fancy and very fast slide rules; their job is to run approximate numerical methods very fast. The only true types are `float` and `bool`; programming is translating linear algebra into algorithms which estimate the correct results stably with bounded error, ideally by calling the right bits of LAPACK. Paradigmatic language: historically, Fortran; these days, whatever array language is embedded in the libraries they're using, surrounded by Python or (if statistician) R.

The point is: these are three markedly different worldviews, and none of them is any more fundamentally wrong than the others – they're all useful and wildly incomplete. So unless you can get agreement – or at least empathy – within your team, you're going to spend a lot of time talking past each other.

learc83
I like that categorization. One semester during college I was taking both a computer architecture class, and an automata/theory of computation class--I still remember when I realized that there were 2 fundamentally different ways of approaching computing. Studying the history of computing, you'll also come to a similar realization that there are 2 fundamental approaches.

Personally, I've always loved both paradigms, and I've never been able to decide which one I prefer. I tend to move between the 2 depending on which one works better for the task at hand.

That realization (along with the fundamentals required to understand both paradigms) was one of the most useful things I got out of my degree.

a-nikolaev
Yeah, interestingly, I have this impression that Python is a modern-day Fortran. (Largely for the reasons you describe in the third category of programmers.)
nostrademons
On some level that's literally true: when you run NumPy or SciPy code, most of your basic array manipulations are done by BLAS, which are a set of very tightly optimized linear algebra routines originally written in Fortran.
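
For concreteness, a minimal example (assuming a NumPy build linked against a BLAS implementation, which is the usual case): the matrix product below is handed off to an optimized BLAS routine rather than executed as Python loops.

    import numpy as np

    a = np.random.rand(500, 500)
    b = np.random.rand(500, 500)

    c = a @ b                 # dispatched to the BLAS NumPy was built against
    d = np.dot(a, b)          # same code path for 2-D float arrays

    print(np.allclose(c, d))  # True
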
adw
A very large part of the answer to "why do people use Python?" is "why did physicists adopt Python en masse in the early 2000s?" and a large part of the answer to _that_ is "f2py exists"...
bigger_cheese
At least among my colleagues (process engineering) the language is basically an afterthought. Your slide rule analogy is right on.

For a lot of the people I work with code is just a way of representing mathematics. So you end up with "Fortran written in every language"

Here is a snippet of code (this is written in C, but I guarantee most of my colleagues would write it the same regardless of whether it were Fortran, Python, Matlab, whatever).

    for (i = m1; i < m2; i++)
    {   // f(p)^2 = (u^T)(Q^T)(D^2)Qu and F(dF/dp) = (u^T)(Q^T)(D^2)Q(du/dp)
        g = h;
        h = (v[i+1] - v[i]) / (x[i+1] - x[i]);
        g = h - g - r1[i-1]*r[i-1] - r2[i-2]*r[i-2];
        ...
    }
Here's some SQL

    SELECT t2.Line, t2.Slope, t1.X, t1.Y, t1.Y - (t2.Slope * t1.X) as Intercept /* B = Y - Mx */

Retric
When you try to ignore the hardware, you find out that things like random bit errors occur. Thus pure symbol manipulation is unequivocally false.

Pretending a computer is something else can be a useful abstraction, but you can still learn what's actually going on.

shalabhc
So where do Smalltalkers and Lispers go?
minikomi
Outside in the car park playing hackey sack
_emacsomancer_
presumably in the "mathematical" group
azeirah
Mental health facility :(
MrEldritch
Statistically insignificant.
lispm
Part of the Lisp tradition was the 'knowledge engineer'. Identify the knowledge and processes necessary for problem solving and implement a machine for that: logic, rules, semantic networks, ... Bring the programming language near to the problem domain.
hathawsh
That's a very interesting categorization. I would generally place myself in the 'electrical engineers' category. I usually code in Python, Javascript, or Java, but I see those languages as essentially C or machine code with productivity enhancements like garbage collection, objects, closures, and lots of interoperable libraries.

OTOH, I would also place myself somewhat in the 'mathematician' category because I see imperative code as functional code where each instruction is a function that takes the current machine state and some parameters and computes a new machine state. The new machine state is an input for the next instruction. When you see it that way, instruction pipelines in the CPU become very easy to comprehend: the CPU calculates one or more possible future machine states while waiting for a response from DRAM, then selects the correct state based on the response.

Seeing it that way also reveals the main issue with imperative programming: each instruction accepts too many inputs! Functional programming helps constrain the inputs so that it's easier to reason about what the code does and how it can change.
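
A small sketch of that "instruction = function from machine state to machine state" view (the three-register toy machine here is invented for illustration):

    # Machine state as a plain dict; each "instruction" returns a new state
    # instead of mutating the old one.
    def load(reg, value):
        return lambda state: {**state, reg: value}

    def add(dst, a, b):
        return lambda state: {**state, dst: state[a] + state[b]}

    program = [load("r0", 2), load("r1", 3), add("r2", "r0", "r1")]

    state = {"r0": 0, "r1": 0, "r2": 0}
    for instruction in program:
        state = instruction(state)   # the only input is the previous state

    print(state["r2"])  # 5
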

adw
It's a pile of massive overgeneralizations, not least of which is that people don't fall neatly into these categories. But it's a useful framework for thinking about miscommunication, I've found.
hathawsh
I agree. Next time I have a disagreement with another coder, I might ask them whether they see computers as primarily circuits, set theory implementations, or AI. (I might add "filing cabinet" as another category.) It would help me understand which assumptions I can probably make.
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.