Unrelated, but I noticed the podcast's volume control only applies the volume change on mouse release of the volume slider. That prevents adjusting the volume with live audio feedback; you have to guess where the slider knob should be.
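For what it's worth, the usual fix is small. This is a hedged sketch (I haven't seen the player's actual code; `slider` and `audio` stand in for whatever elements it uses): browsers fire the "input" event continuously while a range slider is dragged, but "change" only fires on release, so wiring the volume to "input" gives live feedback.

```javascript
// Wire a range slider to an audio element so volume updates during the
// drag, not just on mouse release.
function attachLiveVolume(slider, audio) {
  const apply = () => {
    // HTMLMediaElement.volume expects 0..1; assume a 0..100 slider.
    audio.volume = Number(slider.value) / 100;
  };
  slider.addEventListener("input", apply); // fires on every drag step
  apply(); // sync with the slider's initial position
}
```

With a real `<input type="range" min="0" max="100">` and an `<audio>` element, dragging the knob would then adjust the volume as you move it.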
Except that it is only local or small risk minimization; the catastrophic existential risks end up maximized to the point of becoming inevitable.
With companies, the MBAs happily optimize away anything that isn't immediate revenue. So manufacturing gets offshored, along with all the deep knowledge of how to make things. R&D gets shrunk because it is a quarterly expense whose returns can't be immediately specified. They kill or sell off all the "dogs" of products and try to milk the "cash cows". This makes the numbers look great until things change and the "cash cows" need updating for a new market environment. But the company now has no idea how to do it, because it has no seasoned and up-to-date product development, no R&D technology development, and no manufacturing know-how. So it goes into a death spiral, if it hasn't already been disrupted.
On the societal level, MBAs justified the western world outsourcing all its manufacturing because it was "fungible", it was cheaper to do it offshore, and it made more profit this quarter. Meanwhile, all that know-how was stolen by China, which isn't interested in quarterly profits but in global hegemony, and which now has both economic and strategic chokeholds on the western democracies (everything from corners on the lithium and solar panel markets, to spies in western corporations, to backdoors in key commercial and likely military microchips). The entire strategic advantage of the west was sold out for a few dozen quarters of increased profits. This will go down as a colossal strategic blunder of historic proportions.
>...all that know-how was stolen by China, who isn't interested in quarterly profits, but in global hegemony...
This seems a little simplistic? "China" isn't a monolith, and I'm pretty sure Chinese companies are just as interested in quarterly profits as Western ones. It's not like they're all state enterprises. And they didn't "steal" know-how - they earned it, by doing stuff (aside from the odd case of corporate espionage, which certainly cuts both ways). Nor is the United States any less interested in global hegemony than China.
I suggest that the reason for manufacturing moving to China is simply because labor is cheaper there (large population and weak labor laws). The Chinese government has played its hand well, but it doesn't represent some grand asymmetry in strategic planning - it's just the hand of the market as always.
Isn't it? If the companies all take orders from the state, and have state representatives on their boards, that seems pretty close.
> And they didn't "steal" know-how - they earned it, by doing stuff (aside from the odd case of corporate espionage, which certainly cuts both ways).
Espionage is asymmetric, and there are also cases of China seizing whole companies outright from foreign investors. Capital controls and controls on the ability of foreign companies to do business are very much asymmetric.
You do not need the whole of China to do it. You do not even need a majority. Just some of it, which then recreates things from the stolen parts, from WeChat to TikTok… And it is a one-way street as well; hardly anyone steals anything back.
I still think the west is doomed. But both Hong Kong and Ukraine are wake-up calls. When they act very badly, can you fight back, or do you just act like Germany? Too many dependencies…
Good luck. You need it. We all need it, as the only thing left is luck. I hope not. But your response…
> Nor is the United States any less interested in global hegemony than China.
Insofar as the United States is a democracy, it is not interested in global "hegemony". It is interested in democracy spreading all over the world. Democracy is not hegemony but the freedom of each person to have one vote, with nobody above the law. Of course, that could change. A Nazi party could gain power in the US. Then you could say for sure that the US is interested in global hegemony.
The US maintains the largest military in the world, despite the mainland being virtually unassailable by virtue of geography, and has overthrown numerous democratically elected governments while leaving brutal and regressive regimes intact. American financial interests are a much better predictor of foreign policy than "spreading democracy". As for nobody being above the law, that isn't even true in the US itself, never mind its satellite states.
Politicians love to assert as you do, but I'm afraid actions speak louder than words.
The simple fact of the matter is that if you want to live a self-determined life, or have a self-determining government, you must be better armed, better prepared, and better allied than the local bully or the autocracies.
If you are not better armed, prepared, & allied, the local bully WILL steal your lunch money every time, and your democracy WILL be overrun by an autocracy. Just speak to anyone with Baltic or Finnish heritage about their history with the Russians/Soviets/Russians.
The US just happens to be the largest democracy, so it leads. But if you think that the US could get away with a withdrawn isolationist policy because it is so "geographically unassailable", and let the other democracies fall, that would be a catastrophic failure. Hell, we are already under attack from inside with direct support from Russia.
And yes, no large organization will make every move without mistakes, even strategic errors. That does not mean that it is somehow equal, or even close to the autocracies (this will no longer be true if an autocratic party takes over US, which is a very real and present danger).
Please learn how the world really works before you spout simple-minded false equivalency.
The USA maintains military bases in all corners of the world, from Australia to Guam, Eastern Europe to Brazil. If they're not interested in hegemony and all they want is to spread democracy (which sounds eerily similar to the Jesuits in the 1500s trying to Christianize the "savages"), why would they do that?
If US is able to spread democracy all around the world, it will not be the "Hegemony of the US". It will be the "Hegemony of Democracy".
Consider Germany after WW2. The Allies won against Germany, but did the US establish a "US hegemony" in Germany? No. They established democracy in Germany. Once you give people the freedom to self-govern, you cannot have hegemony over them, unless you are a corrupt state. But assuming the US is a working democracy, corruption will be rooted out, because people will vote for its removal.
Right, only authoritarian governments are interested in conquering territory.
Russia is trying its best to annex Ukraine. But it is not "Russia" that is doing it; it is the authoritarian government of Russia. When did the US last annex territory?
The workhorse of authoritarian politics is tribalism. They try to say it is a competition between countries and or races and or religions. They get supporters by making them believe it is their tribe against other tribes that is threatening their very existence.
But the real fight is not between countries or "tribes". It is between Democracy and Autocracy, and that fight is happening in every country.
The simple fact is that anyone who wants to live a self-determined life, and have a self-determining country, MUST be better armed, prepared, and allied than the local abusers and bullies or the global autocrats.
If you are not better armed, prepared, and/or allied, then the local bully will steal your lunch money every day, and the global autocrats will keep taking whatever they want, whenever they think they can get away with it. See China and Tibet, Hong Kong, the "nine-dash line", Taiwan, etc. See Russia and Chechnya, Georgia, Syria, Ukraine, the Baltics, etc. Neither has EVER lived up to an agreement. They cannot be honestly bargained with.
They will not stop unless they are stopped by an external power. That power must come from the democracies of the world. Without the US, EU, Australia, etc. being able to project power, we WILL soon find them on our doorsteps. Yes, the US has ended up being the 'world's policeman' since WWII. This is literally what enables free trade (hint: the piracy problem off Africa kinda died once the US Navy started getting quietly involved, and that's just one example).
Please learn about how the world actually works before spouting false equivalence.
The entire strategic advantage of the west was sold out thanks to trying to standardize management? Come on, there's certainly an element of truth with relation to the downsides of globalization, but that's some pretty heavy hyperbole.
In terms of what is wrong with this apocalyptic view of the future, I would start with the idea that China has some unassailable chokehold on the West. A lot of influence over our supply chain (which is at the moment divesting), sure. Someone like Peter Zeihan can articulate it better than I can here (e.g. https://www.youtube.com/watch?v=QzT38jCUpgU), but China is really on its knees at the moment, owing in large part to the instability of authoritarian regimes. We might see them get back on their feet in the next few years, but we might just as easily see a catastrophic collapse of their government and economy. Time will tell.
It is not heavy hyperbole. Compare the relative positions of the West vs. China only 25 years ago vs. today. Before, China had neither military technology that could challenge the West nor an economy of comparable size. Today, it has ripped-off fighter jet designs for everything from the F-16 to the F-35, carrier-killer missiles, six nuclear attack submarines, a domestically produced aircraft carrier, and more, and it is the 2nd-largest economy in the world. All in a quarter of a century.
We gave away the forking store. And the CCP has used that new position to violate its agreements on Hong Kong, and it now continuously and explicitly threatens war on Taiwan.
The great experiment has been tried and resoundingly failed. It was thought that open trade and information exchange would result in openness and democracies in the autocratic countries. Instead, it empowered and emboldened the dictators.
That said, we have noticed, and China has blown it so that they no longer have the benefit of the doubt; we know the experiment failed, and know their absolute intentions to remain autocratic, and grow at the expense of democracies.
So we are now strategically pulling back. I do not think it is anywhere near over. I would not bet on China winning long-term.
But the west, by trying the Great Experiment (which was truly generous-minded) without restraint (which was stupid), has put us in a far more vulnerable position than we would otherwise be in. It will be far more costly and risky to get the democracies of the world out of this mess than it would have been to avoid it in the first place.
The US needs China and China needs the US; see The Wealth of Nations. The only problem, as you say, is the authoritarian regime in China, which opposes progress for its people: the authoritarians have taken over, so of course they look after their own interests, not the interests of the majority. Therefore progress is coming very slowly to the masses. China used to be ruled by a "junta", but now it seems to be ruled by a dictator, a single person. He might be a good person or evil, and so will his successors be. At some point there will be a very evil dictator, just like in ancient Rome.
It's more accurate to say: "China needs the cooperation of the US and EU in order to sustain its growth. But with enough effort, the US and EU can find a replacement for China."
You can relocate supply, but you can't relocate demand. The West can find new partners to buy stuff from, like India and Vietnam, and is doing so now. If it really wants to, it can pump money into those countries to speed up the process. World leaders are now acutely aware of China's political instability. I think no matter what happens, the era of China being the whole world's indispensable supplier is over; no one is really comfortable with it anymore. Between Covid and Xi Jinping, China has only itself to blame.
The main thing that gives me hope is that the type of knowledge generation environment in the western world still hasn’t been acquired by dictatorial powers like China. Given the exponentially increasing technological change we’re experiencing, the knowledge transferred via the method you described will likely be surpassed and rendered obsolete relatively quickly (hopefully) and make that transferred knowledge less likely to undo the progress made towards global order and uplift.
Because China is experiencing all kinds of internal demographic and political turmoil, and they’re still much further behind than people realize, what I worry about more in the long run is decay of western education than anything externally driven. We are our own worst enemy, and we risk hollowing out what has historically been the engine of innovation for the majority of the world (I am talking primarily about the pipeline feeding people into US University systems since World War II and high level technical/tinkering environments, as I think European education seems to have lost a decent amount of innovative capability a while ago, though I know much less about it). That problem is deeper than the problem of products of that system being sold off as you describe, but related. That poses a far bigger long term risk to domestic and global prosperity than China, imo, although China is still a massive risk for at least the next decade if Xi continues his current trajectory, and the education problem is of course related to our ability to confront that.
I see two extremely important problems that need to be solved:
1. The most intelligent and competent members of the public need to be pulled from all backgrounds, colors, and creeds and be rigorously evaluated purely for ability and merit, regardless of all other criteria. Historically there has been bias in favor of the host nation's population. Now there is bias in favor of restorative education and nepotism. While truly restorative education has merit, it cannot interfere with attempts to identify and nurture the best of the best, which is of utmost national security importance, and which is important for the maintenance and improvement of the machinery that has given us so much material prosperity.
2. We need to accept that not everyone is able to handle the complexity of the world we have created and educate those who are not in that first highly selective pool of people how to be good citizens, participate in complex systems in a way that’s beneficial for all parties despite the complexity, achieve high status, and create a compact that actually rewards them for being good citizens regardless of intellectual ability. There is a very toxic aspect in the upper echelons of modern society where those not able to handle the insane complexity and competitiveness of technocracy are essentially lied to by most of our educational system to continue schooling and training indefinitely, or they are ousted from the training environment and treated like livestock. Many people then end up clinging on to large bureaucratic systems for meaning and survival, which becomes further and further out of sync with real societal needs. This is a much harder problem to solve than the first given modern organizational complexity, but people will not accept the solution to the first problem (which is inevitably unbalanced and discriminatory against those less able) unless the system as a whole is explicitly rewarding and accepting everyone for participation and giving them achievable pathways to high status.
I see that second problem as THE problem of the 21st century. If that problem is not solved, or if it is forcibly “solved” arrogantly and stupidly, I worry the world will be plunged into a level of dysfunction that makes the past 3 years look like nirvana. If that problem is solved we could find ourselves enjoying the highest levels of prosperity, meaning, belonging, and global uplift in the history of mankind in the decades to come.
You are completely correct if the meritocracy fails to provide a fair allocation of resources.
The way it is going, the "merit" scale equates merit with the ability to gain control of money. Honesty doesn't even matter; there are many more Madoffs and SBFs in the world who haven't yet been caught, and everyone hails them (at least until they fall). Madoff died in jail, but only after living for decades at the top; some would see that as a good bargain of a career plan.
If we build a real meritocracy AND provide for equitable distribution of resources to even the least among us, we can do very well.
There is a lot of movement in that direction, but until the abusers at the top get the point that there will be no society even worth living above without equity, we could be headed for real darkness.
Aiming towards a legitimate meritocracy is the least dystopian aim possible.
Aiming at anything else will simply be a less effective, more corrupt, more unjust, more arrogantly constructed system.
True meritocracy requires aiming away from corruption, towards justice, and towards humility. That will never be perfectly achieved, but the lack of willingness in large sectors of society at the moment to even aim at the target is something I find both incredibly depressing and incredibly counterproductive strategically. There are tools and procedures that can be systematically employed to measure merit which, while imperfect, are better than nothing, and they seem to be increasingly rejected because people don't like the results. Those results should not be considered static, nor perfectly calibrated to merit, nor reflective of worth as a high-status member of society. But the incredible amount of sensitivity around any attempt at identifying and nurturing people with signs of high potential, regardless of background, is a giant Achilles heel affecting all of society. Failing to properly address and assuage those sensitivities so we can pursue meritocracy to the extent possible harms everyone in the long run.
“Blessed self appointed teachers” should not and cannot exist in any functional actually meritocratic system. In order for mass civic education to be effective there needs to be a core set of criteria for what constitutes a good teacher and a good citizen that all parties negotiate on and largely agree on (to the extent possible) that’s actually (not just superficially) bottom up and not imposed by social engineering types.
The exact criteria for what constitutes a good citizen can and should be varied based on location and can and should be driven by the values of any given local community. But there also needs to be a convergence on higher order values that can help negotiate between communities that have different values. A good education system needs to incorporate local community values while also fitting into a larger system.
That is incredibly difficult to achieve, and impossible to fully realize. But that should be the aim. What we have now is based off an old Prussian industrial model that is serving fewer and fewer people while also increasing disunity by preaching grievance. The education system could be much much better and both serve and be run by everyone in a much more unifying and cooperative way if there was better leadership, less inertia in the old system, and a much clearer and beneficial social contract at the foundation of the system.
> That is incredibly difficult to achieve, and impossible to fully realize. But that should be the aim.
What makes you confident that a not-fully-realized version of your utopia does not end up being a dystopia? For reference, see what happened with the not-fully-realized utopia of communism in the Soviet Union.
Every functional organization throughout history has been some form of meritocratic system. Functional does not guarantee benevolence; a meritocratic system is neither utopian nor dystopian in and of itself, but some attempt at meritocracy is a prerequisite to being able to aim a system at all and keep it from causing harm through dysfunction.
Whether or not more local emphasis is better when talking about civic education is a much longer conversation. Determining what proper civic education should be is an extremely difficult balancing act. That difficulty and the long term ramifications of education is why I think it's likely to be THE most important problem in upcoming decades. In order to avoid dystopian outcomes that difficulty needs to be confronted honestly and pragmatically. Systems become most dysfunctional and dystopian when they pursue a vision without regard for pragmatic considerations and are willing to become increasingly "in debt" to a vision by doing things that are very bad in the short term to medium term for an imagined, non existent benefit in the long term. I think it's perfectly possible to aim a system in pragmatic directions towards actually viable paths to betterment which balance short, medium, and long term considerations without the type of indebtedness to vision that leads to bad outcomes. Those paths won't be perfectly defined, and the benefits might end up being modest. In short I don't think I'm advocating for anything utopian, just intentional, pragmatic, and in service of whatever best paths make themselves visible. I think we are not looking at paths which are much better than the one public education is currently on. Exactly how much better civic education could be is a big unknown, but my hope is that it could be drastically improved.
Indeed. This gives rise to the triumph of robotomorphism over anthropomorphism, i.e. HR as human robots.
It's easier to hire humans that act like robots than to design robots that act like humans. Robots have an uncanny valley described on wikipedia, that, when reversed, leaves us with a robot-centric human critique:
In aesthetics, the uncanny valley is a hypothesized relation between a human's degree of resemblance to a robot and the emotional response to the human. The concept suggests that humans that imperfectly resemble actual robots provoke uncanny or strangely familiar feelings of uneasiness and revulsion in observers. "Valley" denotes a dip in the human observer's affinity for the robotic human, a relation that otherwise increases with the human's robot likeness.
No. Valve is privately owned, with GabeN holding a big share (AFAIK; since it's private, we wouldn't easily know who or what owns each percentage of Valve).
You might hear about Valve in these kinds of discussions not because of its ownership model but because of its "flat hierarchy", the philosophy of "no boss". But obviously, humans always organise into hierarchies; companies like Valve simply have no formal ones, so you have to navigate the social/political landscape well to understand whom you need to please.
Also, a single person can't usually deliver a big project in a reasonable timescale, so how do you get a team?
There's always a lot more politics (arguing, convincing others) involved in flat hierarchies or co-ops, but they are much less likely to end up with a power-hungry vulture dictating things.
Same. The question I’ve been finding more and more interesting over the years is “how do you tell when you have an interchangeable component/what are the implicit dependencies holding things together”.
You run into an abstracted version of that in software all the time, but it's way more ubiquitous than just a software or business problem. There's always a limit to interchangeability, some kind of assumed context. That rabbit hole is incredibly deep; I don't think I've ever run into anything comparable. If you pull on that thread long enough, you end up in deep esoteric discussions about perception, philosophy, and the history of categories/evolution. And deciding whether that whole rabbit hole I ended up in is an unrelated tangent or not requires dealing with the exact same problem (namely, whether the boundaries you've placed around something are legitimate and fit your definition, or whether they belong somewhere else and you're missing very important hidden context).
The franchise restaurant world has tried to develop management processes that allow people to be plugged into them like interchangeable components. McDonalds is probably the most prominent example. The system works fairly well.
I've often thought the same. I've recently been reading "Behemoth: A History of the Factory and the Making of the Modern World". It's not only the history of the factory but the history of labour in relation to capital and industry. I'm only up to Henry Ford at the moment, but it's a good read and puts these questions in a different light. In short, it's the money: these systems are designed to minimise risk; worker happiness is not a priority, and any improvements in conditions are only gained by worker actions of some sort.
Because capitalism incentivizes maximizing capital flow to the owner of the organization. Making labor interchangeable and delivering a minimally viable end product are both ways of minimizing capital that could otherwise be made to flow to the owner of the organization.
If you want to create a management style that optimizes for quality of product or optimizes for management styles that reflect the needs of labor, instead of a process that optimizes for minimizing labor, then you need to change the incentivization plan accordingly.
My mentor worked at JPL. He had a BS, MS, and PhD from MIT, and he got out of there quickly (I believe in less than 2 years). He said the pace of work was so slow that he didn't feel like anything got done in his entire time there.
Depends on what you're working on and on if you're defining "getting stuff done" on a macro or micro scale. For large flagship missions, 2 years is nothing. The scale of work and level of verification required by these projects is massive and takes time. You've only got one shot to get things right with billions of dollars on the line, don't rush it.
High-assurance isn't really the culprit. It's more of a funding issue. If long NASA programs could save part of the first year's budget to use the second year, things would go a lot faster.
You might get hired into an overstaffed team and not do that much for a year until it's crunch time and then you're underfunded so it takes an extra year. Arguably you spent three years to achieve one year's worth of accomplishments.
It's pretty inherent in those types of projects. How long do big aerospace projects take? (And then they may be canceled when some other bidder wins the deal.)
Big hardware development has probably accelerated somewhat overall. The development of new chip architectures and "big iron" computer systems took at least several years back when I was a product manager, but safety-critical systems, or things you get one shot at, still require a slow, deliberate process.
SpaceX is currently set to revolutionize the entire space industry (again) with Starship. The abstract idea of Starship was only first presented at a conference in 2017. By 2020 they had developed and were doing individual component flight tests. They just had their first fully fueled test of the entire integrated system yesterday.
Assuming they keep at pace, there will imminently be a static fire and it will be headed to space this year. And to emphasize it's not some evolutionary thing but a complete rethinking of rocket design, unlike anything before, that stands to once again completely revolutionize the industry should it succeed.
Going from spitballing to revolutionizing space, in 6 years (2 of those years being during a highly disruptive pandemic), on a private budget? I'm inclined to say the problem is government, but I'm almost wondering if that isn't just a knee-jerk reaction. It just seems that in modern times many "businesses", government included, just don't really have the capacity to move quickly, even when it's 100% possible.
I heard here (?) how a senator insisted that a project reuse the components of an old programme, and then became the administrator of that project. Hence, overall, you could say we have a 40-year-old idea for the new moon landing project. Great for retirement.
This is a bit off topic but your comment reminded me of something I learned only comparatively recently: while these degrees are things we earn one at a time, they are sequential and the result is that your status (the degree to which you have climbed the academic ladder) does not accumulate, it only changes.
So at one point in time your mentor would have held the status / degree / title of Bachelor of Science. They were then promoted to Master, and then to Doctor.
That’s probably what you’re saying, and this isn’t really a correction. It’s simply something interesting that I wanted to share with everyone else.
Did Lisp ever actually "rise" at JPL, or was it just used on this handful of projects by a single developer or a small group of developers?
> At one point an attempt was made to port one part of the system (the planner) to C++. This attempt had to be abandoned after a year. Based on this experience I think it's safe to say that if not for Lisp the Remote Agent would have failed.
I like Lisp and am certainly no fan of the complexity of C++ (which I barely know, for that reason), but is this the conclusion that should be drawn? Porting an existing, in-work application from one language to another, when the two languages are wildly different, seems like trouble from the start. It says nothing, however, about starting from the beginning with another language. The quoted conclusion seems to skip over the possibility of using another language from the start.
I'm going to straw man a bit, but it feels like lisp enthusiasts are desperate to find stories of commercial software being written in lisp.
I'm sure there are many legitimate examples I haven't heard of (using Lisp for almost all of a specific domain at scale, with a mature codebase, though even that's arguable), but I've yet to see any example with meat on its bones, besides the company that did Crash Bandicoot on the PS1.
And even that one I'm unsure of as I feel it's been mythologized with how often it's pointed to.
I don't blame them, though; I root for Lisp very much. I'd rather see articles like this posted and picked apart for validity than not hear about them at all.
> besides the company that did crash bandicoot on the PS1. And even that one I'm unsure of as I feel it's been mythologized with how often it's pointed to.
Well, good instincts, because it wasn't. Naughty Dog used Lisp to write a compiler for an s-expression based C-like language called GOAL (it was kind of a macro-assembler with a register allocator). The reason for using s-expressions was to get a syntactic macro system at compile time.
The game engine was written in this GOAL language. There was no list processing (car, cdr) or garbage collection happening on the PS2; that happened only during compilation, in macro expansion.
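To make the idea concrete, here's a toy sketch in JavaScript rather than Lisp (my own illustration, not actual Naughty Dog code, and every name in it is made up): the source is nested arrays standing in for s-expressions, macros rewrite it at compile time, and the emitted output is flat pseudo-assembly with no list processing left for runtime.

```javascript
// Macros are compile-time rewrites on s-expressions (nested arrays here).
const MACROS = {
  // (when c body...) -> (if c (progn body...) (nop))
  when: (cond, ...body) => ["if", cond, ["progn", ...body], ["nop"]],
};

// Recursively expand macros until none remain in the tree.
function macroexpand(form) {
  if (Array.isArray(form) && MACROS[form[0]]) {
    return macroexpand(MACROS[form[0]](...form.slice(1)));
  }
  return Array.isArray(form) ? form.map(macroexpand) : form;
}

// Emit flat pseudo-assembly from an already-expanded form.
function compile(form, out = []) {
  if (!Array.isArray(form)) {
    out.push(`PUSH ${form}`);
    return out;
  }
  const [head, ...args] = form;
  if (head === "if") {
    compile(args[0], out);
    out.push("BRANCH-IF-FALSE else");
    compile(args[1], out);
    out.push("LABEL else");
    compile(args[2], out);
  } else if (head === "progn") {
    args.forEach((a) => compile(a, out));
  } else if (head === "nop") {
    out.push("NOP");
  } else {
    args.forEach((a) => compile(a, out));
    out.push(`CALL ${head}`);
  }
  return out;
}

const source = ["when", ["ready?"], ["play-anim", "jump"]];
const code = compile(macroexpand(source));
```

The point is that `when` disappears entirely before emission: the s-expressions and the macro machinery exist only in the compiler, and what ships is a flat instruction stream that needs no car/cdr or GC at runtime.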
I know less about GOOL, but the following seems to imply that GOOL was probably also an s-expr-based non-Lisp language:
> As a language GOOL borrows its syntax and basic forms from Common Lisp. It has all of Lisp’s basic expression, arithmetic, bookkeeping, and flow of control operators.
Note that Andy explicitly only mentions things like syntax and arithmetic, but not any kind of list processing. Again, I am not sure, but I wouldn't be surprised if GOOL isn't a Lisp either, but only a Lisp-flavoured DSL for event-based AI code.
What would be the defining characteristics of Lisps, that separate them from other languages? Surely it can't be simply s-exps, since that is pretty much all GOAL and Lisps share. Would a C written in the form of s-exps qualify as a Lisp?
Io and Rebol both have homoiconicity and a macro system. Would you call them a Lisp? No wonder Lisp is the most powerful language in the world, when all powerful languages are secretly Lisp – without even knowing it!!!
I mean, the name "Lisp" is short for "List Processing", it's disingenuous to claim that Lisp doesn't need runtime List Processing.
Lisp needs list processing. But it's much more. Lisp has several kinds of numbers, symbols, strings, characters, vectors, arrays, hash tables, records, classes, objects, natively compiled functions, ... -> and all those are not made of lists. An array in Lisp is not made of lists underneath. A natively compiled function is not a list, it's a vector of machine code.
Putting aside the sarcasm, don't mistake Lisp from 1960 with what came after.
"First released in 1997, Rebol was designed over a 20-year period by Carl Sassenrath, the architect and primary developer of AmigaOS, based on his study of denotational semantics and using concepts from the programming languages Lisp, Forth, Logo, and Self. "
"Io is a pure object-oriented programming language inspired by Smalltalk, Self, Lua, Lisp, Act1, and NewtonScript. Io has a prototype-based object model similar to the ones in Self and NewtonScript, eliminating the distinction between instance and class. Like Smalltalk, everything is an object and it uses dynamic typing. Like Lisp, programs are just data trees. Io uses actors for concurrency. "
Exercise for the reader: how much from e.g. Common Lisp might be found in Io or Rebol.
I read this article some years ago and always wanted to ask: besides size, what were the arguments against Lisp at JPL? The author's position is biased, as he recognizes, so I'm curious about the other side of this story.
One of the footnotes mentions that multilanguage integration was causing some headaches, and (the author's bias showing through) that is blamed by the author on the IPC server being crashy because it was C, not Lisp.
Since this essay cycles here on HN perennially, and people inevitably get curious about how your views on Lisp or other languages have changed, you might want a follow-up page that you keep updating as your views change? Maybe linked from the original article?
Year 2033 - lisp is still a great language for $xyz
Year 2043 - I was wrong, I should have used COBOL all along, how was I so blind?
Yes, a lot. In fact, for the last 5-6 years I have been working to help develop and maintain a tool for VLSI chip design written in Common Lisp. (Unfortunately, the company that built it was recently acquired by a large publicly traded corporation, and they have decided to kill the project. Plus ça change, plus c'est la même chose.)
I have also done a lot of personal coding in CL. Most recently I wrote a spam filter that I use in production on my personal mail server.
I’m intrigued by the distinction between best practice and standard practice. Isn’t it standard because it’s the best? I can understand that non-standard practices might be best in niche situations, but this sounds like a discussion of “traditions are solutions to problems so old we’ve forgotten what they were”.
My read is that a standard industry practice (e.g. considering programmers interchangeable pieces and that components should be designed with that in mind) will work in many situations, and therefore the "standard" is reasonable for most of them, but the best practice is context-sensitive; in this case, the context didn't require C++/Java and they were actually not good choices. This was specialized, mission-critical software developed on a tight schedule and swapping programmers in/out of the project simply wouldn't work, so many considerations for choosing (say) Java were out of the question.
There is no such thing as a best practice without specifying a context. The kind of build hygiene required for a long-lived commercial product is very different from what's needed for disposable marketing content. Importing a "best practice" is especially toxic when the underlying context and goals are so different.
Businesses want interchangeable programmers for many reasons: so they can ramp team size up and down, so hiring is easy, etc. But the number one reason in my experience is to minimize the bargaining power of the programmers. Execs absolutely _hate_ having star programmers holding them up for raises. Building complicated in-house tooling in a language like lisp might make it rain, but the devs are then in a position to demand a slice. Procurement 101 is to always have secondary vendors to keep your main vendors honest. Lower productivity is an acceptable price to avoid being held up.
This is a very different context from somewhere like JPL where any programmer there could already leave for more money in industry. Anyone there is already committed to the mission. The cost of lower productivity isn't justified, and might make certain activities simply impossible given the current budgets.
> There is no such thing as a best practice without specifying a context.
Just to be clear: we are in agreement, right? The author of TFA is stating this, and that JPL misunderstood its own context. It wasn't the kind of business/project where your second paragraph applies, and so the practices they adopted were mismatched.
I felt this way through my entire engineering life until I bought a sailboat.
Everything on marine stuff tends to be flat-head or similar design -- engaging a flat driver into a slot in a bumpy/roll-y ocean is hell. The self-centering engagement aspect of a cross-socket style drive is absolutely fantastic after a long day of trying to tighten flat-slot fastened tube-clamps and the like at sea during calm conditions, let alone during weather.
That said -- yes, phillips head sucks, they round out and won't handle any decent amount of torque over repeated sessions without deformation. Robertson (square)/allen/torx kind of suck at sea, because they lack any form of self-centering without using special taper drivers.
In a perfect world I think that I would like to use Pozidriv fasteners and drivers everywhere, but the reality is that there are so many types of specialty hardware in use in the marine world that it would be difficult -- if not impossible -- to switch it all entirely over.
> Everything on marine stuff tends to be flat-head or similar design -- engaging a flat driver into a slot in a bumpy/roll-y ocean is hell.
Off topic: the hell of manually controlled lift-crane operation on a bumpy/roll-y ocean is improved by a single rotating support pillar with an expandable web-like forklift mechanism that goes in the water and retrieves the dive vessel with one button push and AI self-guiding.
I wonder if the prevalence of flat head screws has something to do with ease of machinability. I imagine there's a lot of custom or obscure fasteners on boats. A flat head bolt or screw has the benefit of being able to be made with not much more than a lathe.
I am always surprised when I read this kind of article. In my mind a good software developer will figure out how to be productive and get things done in any programming language.
I have often worked with programming languages, editors, operating systems etc. that weren’t my #1 choice. However I have usually been happy and productive, using code generation, editor tools or whatever to automate the things I don’t like.
You love Lisp and hate Java? Great! Use it to generate the Java code you don’t want to write. And be smart. Don’t tell management you are using Lisp. If they ask why you are 10x more productive than anybody else, just say you are using a code automation tool to be more productive. Which is true! Or simply shrug your shoulders, and humbly say ”years of experience” :)
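The "generate the code you don't want to write" idea can be sketched quickly. This toy generator is written in Python for illustration (not Lisp, and not any real tool); the class and field names are invented:

```python
# Toy code generator: emits a Java class (fields, getters, setters)
# from a short declarative description. This is the general pattern
# of automating away boilerplate, not a real production tool.

def generate_java_pojo(class_name, fields):
    """fields: list of (java_type, field_name) tuples."""
    lines = [f"public class {class_name} {{"]
    for jtype, name in fields:
        lines.append(f"    private {jtype} {name};")
    for jtype, name in fields:
        cap = name[0].upper() + name[1:]
        lines.append(f"    public {jtype} get{cap}() {{ return {name}; }}")
        lines.append(f"    public void set{cap}({jtype} v) {{ {name} = v; }}")
    lines.append("}")
    return "\n".join(lines)

# One declarative line replaces a screenful of hand-written Java.
print(generate_java_pojo("Invoice", [("long", "id"), ("String", "customer")]))
```

A macro-capable language like Lisp does this kind of thing with syntax trees instead of string pasting, which is exactly why it is the commenter's tool of choice for the job.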
The developers I admire and want to hire are the ones that can be thrown into a shitty disaster area where everything is broken, and all the tools are unfamiliar and crappy, and still somehow manage to turn it around and fix it. They are f*** heroes. The software developer equivalent of the Navy Seals.
If you don't like Java and generate your way around it, that's not a genuine measure of productivity in Java.
Just like using a HLL (which generates machine language) isn't a measure of machine language productivity.
Whatever approach you settle on is a language. E.g "Bob's Lisp-based Java Generator" is a language. It's neither Java, nor Lisp. That language helps give Bob a certain productivity.
What Bob is doing represents the vast minority, and could create a maintenance problem for the organization, and its army of people who know only Java. Code generation isn't a practical recipe by which every programmer everywhere can hit productivity in any language.
The productivity I am talking about is $ generated per time/cost. Not what you call “language productivity”.
For example, you will be more productive using C than (say) Python if your Python solution requires more servers to run than your C solution. Even if it took longer to write the C solution. Because the long-term cost of the servers will be higher than the development cost. Which is why Google and other companies often rewrite code that already works to C++.
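To make that tradeoff concrete, here is a back-of-the-envelope sketch (all dollar figures and server counts are invented for illustration, not real Google numbers):

```python
# Total-cost comparison: a slower-to-write C solution vs. a
# faster-to-write Python one that needs more servers to run.
# All numbers below are made up purely for illustration.

def total_cost(dev_cost, servers, cost_per_server_year, years):
    """Development cost plus hosting cost over the service's lifetime."""
    return dev_cost + servers * cost_per_server_year * years

c_total = total_cost(dev_cost=200_000, servers=10,
                     cost_per_server_year=5_000, years=5)       # 450,000
python_total = total_cost(dev_cost=50_000, servers=100,
                          cost_per_server_year=5_000, years=5)  # 2,550,000

# The C version cost 4x more to write, yet wins easily on total cost
# once the hosting bill dominates.
assert c_total < python_total
```

The crossover point depends entirely on scale: with one server instead of a hundred, the Python version wins, which is the context-sensitivity point being made throughout this thread.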
Another example is that you will be more productive using C# and the Unity game engine to write a commercial game than (say) Python. Because the Unity engine takes care of all the hard stuff you would otherwise have to write yourself in Python.
And you will be infinitely more productive using C to write a hard real time kernel than Python. Since Python is too slow unless you pretty much spend all your time calling C code to make it fast enough. Heh I can just imagine somebody trying to convince Linus to rewrite the Linux kernel in Python to be more “language productive”. That would be a very short conversation :)
So your comments might be true or not, but that's beside the point. I simply don't care about "language productivity". I care about real, measurable productivity. The kind of productivity that the CEO of a company recognises.
That's a tricky one. If the product didn't sell, but the developers earned good salaries, was there productivity?
If a widely used, free program received $1500 in donations over fifteen years, is that $100/y productivity?
If I write the code and get paid, does it matter whether nobody runs it after that? Or whether it runs a lot without me getting paid more?
Basically, given $/t, whose $ and whose t are we talking about? Should these parameters not be constrained to be those of the same person or people? Or is it okay to divide the money made by some users of the software downstream, by the development time?
Yep productivity is a complicated tricky subject. And yes if your work generates zero $ then your productivity is zero by my definition. Or actually negative if you spent money doing it. This is because I am thinking like a business person.
However you can use other definitions of productivity of course. You could define it as issues fixed per month, for example. However that is even more tricky, because different issues require different skills and effort. And to make it even more complicated, an issue that is easy for Ada to fix might be hard for Bob, simply because Ada has fixed similar issues before. So Ada using C++ might be way more productive than Bob using Python. But that doesn't prove that C++ is more productive than Python, of course.
Which is why studies that have the same people solve the same problem twice, using two different languages, to "prove" that language A is more productive than language B are non-scientific BS. Obviously solving the same problem the second time gives you a huge advantage, no matter the language you use. And obviously the people involved will know one of the languages better than the other. Which again makes the whole experiment non-scientific BS.
So yes productivity is a complicated slippery tricky thing to nail down. We all think we know what it means but the more objective/precise we try to be the more slippery and complicated it gets :)
I used to be a fan of programmer productivity and having the best tools to express ideas in. In our case, this is not about LISP, but about being able to write 40 lines of SQL in half an hour to solve things that other people reach for 1000 lines of backend language to do. But also trying hard to identify opportunities to simplify and delete code instead of adding another layer of complexity on top every time something doesn't add up.
But I've started to wonder. In some ways I feel "our" area in the company has been a victim of our own success: by doing things in smart/efficient ways, the tasks we've done have come to be seen as simple, we didn't need to recruit as much, and eventually, because so few people in the organization are occupied with the task, it is seen as easier or less important, and we are more vulnerable to churn than larger teams.
I've seen simpler problems getting solved in (what I think is) very inefficient ways and I used to scoff at that. But now I think what they gain is a large robust team, which can handle both churn and have more weight politically in the organization. Solving something inefficiently simply offers a way to have more people do the same task, for redundancy.
In a sense, once an organization hits a certain size it seems that efficiency in a programming language is an anti-feature. You WANT a larger set of programmers to be happily working on something just to have a larger team around the code and redundancy in people. If a programming language / environment allows you to be too efficient, then too few people get the job done and you become vulnerable.
At least, that's the best explanation I have at what's really going on in the industry.
This sounds like the standard "but are they busy?" concern hitting. Folks claim they want folks that are working "smarter" and not "harder." Completely ignoring that that is, itself, hard.
I agree with your general concern. I hate it when I know that I am in fact lazy, but that's not actually why I've done less in a given case. I genuinely believe the smaller solution is far, far preferable. All too often "don't do it" is by far the better option for what so much internal tooling does.
To that end, I push back against your wanting a less efficient language. I think efficiency is itself a trap. Redundancy and general pacing are good things. Languages that try to forgo that are not doing themselves any favors.
But, and this is fully to your point, if you don't have spots in your organization that are fully tolerant of mistakes and bad ideas, then you don't have room in your organization to grow. And if you don't have room to grow as an organization, then you probably don't have room to grow the software, either.
You see related issues with operations too. I was contracted to a company using WebSphere, and they had 4 people (including myself) doing very little except tediously deploying applications to the clusters.
So I took a chunk of my downtime and wrote a deployment engine, to eliminate some of the common mistakes, and ideally make things much faster/predictable.
In hindsight I can see why the team didn't like the idea: in addition to the new Perl code that someone would have to support, it was a serious risk to their jobs. If deployments that previously took 2-3 hours of tedium were reduced to 20 minutes of scripting, why would they need 4 people to do it?
Nowhere near the same situation but the same spirit, a lot of computational science still uses fortran (f90, thankfully for more modern codes) and has resisted the push to use C++ like everyone else. Fortran still is the right tool for numerical software, so it's good we avoided the bandwagon.
A relative of mine had a huge fight at work a few years ago with younger management who wanted to re-write the core of their simulation software in Java, instead of maintaining the Fortran code. I trust his judgment when he said it would be a bad idea.
(where "advanced" means: a paradigm that results in greater economy of code per unit of work done, easier debugging, easier testing, complete elimination of entire classes of bug (such as in an immutable language), easier maintenance etc...)