Los Alamos Chess

(en.wikipedia.org)

265 points | by nanna 12 days ago

22 comments

  • keid 12 days ago
    I knew one of the authors, Mark Wells. After he retired from Los Alamos National Laboratory he moved, temporarily, to Las Cruces, NM, to be head of the CS department at New Mexico State University, so he became my boss while I was a student employee in the department. I don't recall his authorship of the chess program being mentioned by any of the profs or grad students (it was a very small department). When I moved to Los Alamos many years later I discovered that he lived just down the street--they'd kept their house in Los Alamos while in Las Cruces and had returned (for the skiing, he said).

    The first article linked below has some details about the chess program not in the Wikipedia article.

    Chess was his (serious) hobby; his research was in programming language design, specifically concerning type theory.

    https://ladailypost.com/former-lanl-mathematician-computer-s...

    https://ladailypost.com/obituary-mark-brimhall-wells-oct-7-2...

    • chemeril 12 days ago
      I'd keep a house for the skiing at Pajarito too! For those reading along it's certainly worth the trip if you're in the area.

      I have a vague recollection of coming across a physical 6x6 chessboard somewhere on lab property and found it a little odd, but never knew it had ties to MANIAC. Lots of history floating around in that place.

    • QuantumSeed 11 days ago
      I was at NMSU during his time as head of the CS department. His interest in chess inspired me to write my first chess-playing program.
  • jhbadger 12 days ago
    I'm not sure if anyone plays this as an actual chess variant, but it is fun to write a chess player for it, mirroring the 1956 version. I remember reading about this in the 1980s and wrote a version in BASIC that could beat me every time (maybe not that impressive objectively, but it impressed me).
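
    For anyone tempted by the same exercise, here's a minimal sketch of the 6x6 starting setup, assuming the standard Los Alamos layout (R N Q K N R behind a rank of pawns); the helper name is just illustrative, not anything from the 1956 program:

```python
# Los Alamos chess: 6x6 board, no bishops, six pawns per side.
# Back rank is R N Q K N R; pawns fill the second rank.

def starting_position():
    """Return the Los Alamos chess starting position as a list of rows.

    Uppercase = White, lowercase = Black, '.' = empty square.
    Row 0 is Black's back rank (rank 6), row 5 is White's (rank 1).
    """
    back = "rnqknr"
    return [
        list(back),           # black pieces
        list("p" * 6),        # black pawns
        list("." * 6),
        list("." * 6),
        list("P" * 6),        # white pawns
        list(back.upper()),   # white pieces
    ]

for row in starting_position():
    print(" ".join(row))
```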
    • huytersd 12 days ago
      I hate sudoku variant videos that rope me in with some impossible scenario and then explain a totally different set of rules.
      • alhirzel 12 days ago
        Then you will hate this: https://www.youtube.com/watch?v=yKf9aUIxdb4

        I love it though, his reactions are priceless.

        • huytersd 11 days ago
          It was this exact person that made me hate it. Oh look an empty puzzle, except you can’t have consecutive numbers, the 2s can’t be in the corners and every alternate cell is the sum of the previous two. It’s stupid.
          • smrq 11 days ago
            What did you expect? It's obviously unsolvable under standard rules--it's proven that a classic Sudoku requires a minimum of 17 given digits, and even if you didn't know that, it's fairly obvious with a little thought that 8 or 9 or so would be a lower bound. What makes this particular puzzle surprising is that any relatively short rule set would make this grid uniquely solvable.

            Note "relatively short". This channel gets lots of "miracle" puzzle submissions that they reject for the very reasons you claim: arbitrary, lengthy rules for the sake of making an empty grid. (Occasionally they let through some in-jokey rules about 3s in the corner or something like that, which to be honest irks me a bit as well.)

            • huytersd 11 days ago
              I expect it not to be called sudoku at that point. Call it what it really is: a nonomino or kakuro or whatever.
              • cedilla 11 days ago
                Why is it okay to call the game a nonomino or kakuro, when it's entirely unlike either game, but so obscene to call it a sudoku, the rules of which it includes?

                And it's not called a sudoku, by the way. It's called a sudoku variant. And yes, with classic sudoku requiring 17 clues, you just have to add additional hints to make anything with 16 clues or less solvable. And a lot of variant puzzles are still a lot harder than anything you'd find in a newspaper.

                There are lots of interesting patterns and strategies emerging from variants that scratch just the same logical itch as sudoku does. And sometimes ideas from variants get picked up by setters for classic sudokus, like the Phistomefel ring, which was popularized by a setter who mainly sets sudoku variant puzzles.

                • huytersd 11 days ago
                  The linked video is very much a nonomino.
              • mtlmtlmtlmtl 11 days ago
                They only call it a sudoku when it is a sudoku, i.e. sudoku rules apply. There are additional constraints, but that doesn't change the fact that it's a sudoku. Los Alamos chess or crazyhouse chess is still chess, isn't it? Chess is really a family of games, and the same is true of sudoku.
                • huytersd 11 days ago
                  No, Los Alamos chess is Los Alamos chess. No one refers to it as just chess. The furthest someone might go would be chess with or without castling. There's pai gow poker and Texas hold'em poker, and they're referred to as such because they're games with different rules. I don't particularly care to die on this sudoku hill, but it bothers me enough to say something.
                  • lanternfish 11 days ago
                    People very, very frequently refer to hold'em as just poker. I've also heard five card draw referred to as just poker. If you asked someone playing PLO what they're playing, a lot of people would just say 'poker' and then elaborate if it becomes necessary.
                    • huytersd 11 days ago
                      That’s because it’s by far the most popular form. No one is going to call pai gow just poker.
          • lanternfish 11 days ago
            Genuinely curious - what makes it stupid?
            • prerok 11 days ago
              Not the OP, but I think they mean that the change to the rules is done to reduce the search tree a program has to traverse, but makes it difficult for a human to reason about. We search for solutions heuristically, and because of the added rules we must backtrack a lot more, which makes it tiring.

              So, the change makes it easier for a computer but not for a human.

            • huytersd 11 days ago
              The thumbnail implies there is a way to solve a seemingly impossible puzzle. But it's really not solvable using traditional rules; you have to come up with weird one-off rules to solve it. At that point, is it really sudoku, or just a sudoku-shaped different game? It's like playing checkers with chess pieces.
            • metadat 11 days ago
              The rules become flexible and abnormal for the sole purpose of supporting someone's agenda. Maybe enjoyable or appreciable in a meta sense, but from a game mechanic PoV it makes strategies meaningless.
              • twiceaday 11 days ago
                Part of the point of variant sudoku is discovering / inventing brand new interactions you've never seen before. And crowd-sourcing votes on whether that discovery process is fun. You are basically getting introduced to a new set of axioms and have to discover and prove theorems.
              • cedilla 11 days ago
                What agenda is that?
                • huytersd 11 days ago
                  Agenda to increase views I assume
    • passwordoops 12 days ago
      For some reason your comment made me think of the classic Simpsons bit, "When I stamp your foot and say 'Hello Mister Thompson' you say hi back." Funny how the human mind works
  • dvh 12 days ago
    Kruskal (author of Kruskal's algorithm) won.

    edit: pardon, wrong Kruskal

    Joseph Kruskal had two notable brothers, Martin David Kruskal, co-inventor of solitons, and William Kruskal, who developed the Kruskal–Wallis one-way analysis of variance. One of Joseph Kruskal's nephews is notable computer scientist and professor Clyde Kruskal.

  • frankfrank13 12 days ago
    Found an example to play online with a bot! https://www.chessvariants.org/play/jocly/los-alamos-chess
    • procgen 12 days ago
      I'd love to play against the original algorithm.
    • Stevvo 10 days ago
      Doesn't follow the rules; enemy pawn promoted to queen upon reaching other side of board.
  • davidw 12 days ago
    > The computer still needed about 20 minutes between moves.

    Wow. Coding and debugging that is... a whole other level of difficulty compared to coding these days.

    • pacaro 12 days ago
      My mother describes laying out the program flow on index cards (or recycled punch cards depending on what was available) on the living room floor and walking through the program/room with a notebook tracking the program state.

      To her that was debugging

      This was mid to late 60s

      • davidw 12 days ago
        That's pretty cool! You should get her to write it up so it's not forgotten.
      • huytersd 11 days ago
        The programs were so simple back then with almost no abstraction. I can’t even imagine laying out punch cards for a basic web dev project now. It might cover the earth or something.
    • tombert 12 days ago
      Every time I read about how slow really old computers were, I say an internal "thank you" to Bell Labs for the transistor. I'll occasionally see a YouTube video about vacuum tube or relay based computers, and they'll show their operations on the order of 3-10 operations per second. And here I am writing this comment with something whose operations are on the order of billions per second.
      • kragen 11 days ago
        lumping vacuum tubes together with relays is a fundamental misunderstanding, and this misunderstanding has led you to make a statement that is false by about three or four orders of magnitude

        electromagnetic relays can switch at up to a few kilohertz, while some vacuum tubes can switch at up to a few gigahertz—billions of cycles per second—though most of them can only handle signals up in the megahertz. in world war ii, before the first computer¹, people were already using vacuum tubes to produce, amplify, and decode multi-gigahertz radio signals. typically such microwave tubes only have a bandwidth of an octave or less; they don't work down to dc with the same transition times, a problem that also afflicts transistors

        the ibm 709 https://en.wikipedia.org/wiki/IBM_709 was a vacuum-tube computer delivered in 01958 which was capable of 42000 arithmetic operations per second. this is literally ten thousand times faster than the '3–10 operations per second' you remember from the youtube videos you have occasionally seen. the 709 wasn't the fastest vacuum-tube computer; the enormous an/fsq-7 reached 75000 instructions per second. and, as i understand it, in the 709, the limiting factor wasn't the vacuum tubes, which could switch a great deal faster than that, but the core memory—the only part of the entire machine that used any transistors

        the an/fsq-7 contained sixty thousand tubes. i haven't been able to find a citation for the total number of tubes in the 709. http://www.bitsavers.org/pdf/brl/compSurvey_Mar1961/brlRepor... says its arithmetic unit contained 2000 tubes and 14500 diodes. the cathode heating filaments in the https://en.wikipedia.org/wiki/7AK7 tubes used in many contemporary computers were about six watts each, and the whole machine used about a hundred kilowatts, so it couldn't have had more than about fifteen or twenty thousand tubes in it; my guess is that it had most of that number

        a variety of schemes for computing at microwave frequencies were considered in the 01950s, but not adopted, in large part because of the absence of sufficiently large memories that could work at those speeds. the cycle time for core memory remained in the microsecond range from its invention until intel finally displaced it with sram in the 70s

        unless you are very young, you have probably seen a cathode-ray tube television set, perhaps even a cathode-ray tube computer monitor. you may be unaware that cathode-ray tubes are a kind of vacuum tube, and that ntsc broadcast is about 5 million pels per second, so that single vacuum tube is successfully processing a signal of a few megahertz—down to dc, since it can successfully display either a solid white screen or a solid black screen. knowing this, it should now be inconceivable to you that vacuum-tube computers could have been limited to 3–10 operations per second. but there's more; the monochrome version of the ntsc standard is from 01941, and not just the data transfer from the electron-gun signal onto the screen but also the radio-frequency demodulation of that signal was being done by vacuum tubes, and not in military or scientific equipment, but in mass-market home appliances!

        (525 lines by about 700 pels per line at 30 frames per second would give you 15 million pels per second, implying frequencies up to 7.5 megahertz, but actual ntsc signals are bandlimited to less than 6 megahertz, so the horizontal definition is quite a bit poorer than the vertical)

        tubes weren't the only game in town, either. the square-loop ferrite used for core memory was also capable of solid-state digital logic, which won't surprise you if you know how core memory works. it was in fact commonly used for digital logic throughout the 01960s, especially in the soviet union, usually in combination with semiconductor diodes, which had been in wide use for radio electronics since the 01920s, long before the transistor. (bose's original 60-gigahertz submillimeter radio experiments in 01895 used semiconductor diodes https://en.wikipedia.org/wiki/Jagadish_Chandra_Bose#Microwav... but it took quite a while to simplify the apparatus to the point that it completely displaced coherers and electrolytic detectors.) as i understand it, in such 'ferrite–diode' systems, the ferrite provided logical inversion, state (memory), and amplification, while diodes provided all the combinational logic, just as diodes did in some vacuum-tube computers like the lgp-30 (3800 additions per second on 113 vacuum tubes: a twentieth of the performance of the an/fsq-7 for a 500th of the number of tubes). what eventually killed ferrite–diode systems, as i understand it, was the same performance ceiling that killed core memory

        but many older atx power supplies still contain magnetic amplifiers, so-called 'magamps', which don't require square-loop ferrite. magamps were also used in the guidance systems for the wwii v-2 rocket

        other digital logic technologies that have been used include pneumatic, fluidic, and neon-tube logic, but all of these are much slower than vacuum tubes, transistors, and even the magnetic logic types described above

        computer architecture has vastly improved since the 01950s. if you were to build a computer out of 5200 vacuum tubes and 18000 diodes (like the 01958 univac ii, which also included 1200 transistors) you could surely implement a bit-parallel rv32iczmmul with dual-tube flip-flops for all the register bits and a very small instruction cache (say, 16 16-bit compressed instructions, 256 bits stored in 512 tubes) and run at several million instructions per second as long as you didn't have to touch core, instead of the 6000 that the univac ii did. but instruction caches hadn't been invented yet, univac ii was bit-serial and bcd, and the cdc 6600 was years in the future

        so we've gotten about three orders of magnitude speedup from architectural improvements, quite aside from switching elements going faster. but early transistors weren't any faster than contemporary vacuum tubes; in fact, most discrete transistors still can't handle microwave frequencies. the main thing that is responsible for the transition from kiloflops on the desktop² to teraflops is not transistors but miniaturization. in the 01930s, people were already miniaturizing vacuum tubes to 'acorn tubes' to get them to run successfully at gigahertz frequencies. transistors are more amenable to miniaturization than vacuum tubes or relays, it's true, but nanometer-sized electrostatic relays would only be about an order of magnitude slower

        as i mentioned above, wikipedia says that the univac i and a lot of other early vacuum tube computers used the not-at-all-miniaturized sylvania 7ak7 tube; as i read the datasheet at https://web.archive.org/web/20221114011216/http://www.nj7p.o... it has an input capacitance of 12 picofarads, passes about 35 milliamps when fully on, and turns all the way off when you bring the control grid to -11 volts. so if you had one tube like this driving the control grids of two other tubes, i think you'd get a transition time of about 8 nanoseconds (60 megahertz) if the tubes were the part of the circuit that limited the speed. but i don't understand how vacuum-tube logic circuits work, so i might be off by an order of magnitude here
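
        that slew-rate estimate is easy to check numerically (a sketch under the same assumptions as above: a 35 mA constant-current source charging two 12 pF grids through an 11 V swing):

```python
# Back-of-envelope slew-rate check for one tube driving two grids.
# t = C * dV / I, treating the tube as a constant-current source
# charging the combined grid capacitance through the cutoff swing.

C = 2 * 12e-12   # two control grids at 12 pF each (farads)
dV = 11.0        # volts needed to cut the next tube off
I = 35e-3        # plate current when fully on (amperes)

t = C * dV / I
print(f"transition time ~= {t * 1e9:.1f} ns")             # ~7.5 ns
print(f"max toggle rate ~= {1 / (2 * t) / 1e6:.0f} MHz")  # ~66 MHz
```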

        corrections on any of this would be most welcome

        ______

        ¹ possibly someone will post a correction to this based on the fact that there were things called 'computers' before the zuse z4, the johnniac, the pilot ace, and the eniac's transition to programmability. yes, there were, but those things were not the things we call 'computers' today, which are machines capable of turing-complete computation up to their memory limits. volume 2 of the first edition of the oxford english dictionary, published in 01893, contains a definition for 'computer', one which was later extended to all kinds of machines used for calculating; the copy i scanned is at https://archive.org/details/oed02arch/page/750/mode/1up?view.... but the things we call 'computers' today did not exist at that time or at the beginning of world war ii. zuse's z3 from 01941 was discovered to be in some theoretical sense turing-complete decades later

        ² or under it, in the case of the lgp-30, which was the desk

      • kevindamm 12 days ago
        ...and that's not all! With access to our modern network you effectively have Teraflops at your fingertips, possibly even Petaflops if you know what you're doing.
        • ForHackernews 12 days ago
          Thank god, because I can `npm install` in mere minutes and run two, possibly even three Electron apps concurrently.
          • FredPret 12 days ago
            He said petaflops, not exaflops
  • moffkalast 12 days ago
    > This program was written at Los Alamos Scientific Laboratory by Paul Stein and Mark Wells for the MANIAC I computer

    I see the names of computers have gone significantly downhill since the 60s.

    • kijin 12 days ago
      > Metropolis chose the name MANIAC in the hope of stopping the rash of silly acronyms for machine names [1]

      Looks like even MANIAC was deliberately restrained to make it sound less maniac than the names of computers that came before it. Can't beat Manchester Baby, Colossus, The WITCH, SWAC and MADDIDA.

      Being Los Alamos, they really should have named it ATOMIAC or something.

      [1] https://en.wikipedia.org/wiki/MANIAC_I

      • moffkalast 12 days ago
        ATOMIAC is a great option, but if you look at today's super computers it's all Sierra, Selene, Summit, etc. Lame ass corporate names.

        I'm at least glad my university lab named their main GPU server The Crushinator.

    • nxobject 11 days ago
      I miss adding "AC" to the end of computer names to emphasize that you indeed have an "automatic computer". ("MACbook Pro" doesn't count, I swear!)
    • tibbetts 12 days ago
      I dunno, the new Microsoft / OpenAI computer is named “Stargate,” that’s a pretty solid name.
  • CrociDB 12 days ago
    > The computer played three games. The first was played against itself. The second one was against a strong human player, who played without a queen. The human player won. In the third game, MANIAC I played against a laboratory assistant who had been taught the rules of chess in the preceding week specifically for the game. The computer won, marking the first time that a computer had beaten a human player in a chess-like game

    I think this paragraph is kinda funny.

    • tibbetts 12 days ago
      I was disappointed that the lab assistant is not named.
  • ryukoposting 12 days ago
    This looks like fun. I like chess variants that use smaller boards like this. They offer much of the same high-level strategic thinking, but a much shorter play time and a lower barrier to entry.

    My favorite one is Onitama, it's actually quite different from chess but the same intuitions apply.

    • jamager 11 days ago
      Onitama is brilliant. For me the best along with Hive and Santorini.
  • QuesnayJr 12 days ago
    If anyone wants to try playing this, it's one of the variants supported by Fairy Stockfish.
  • JoeDaDude 11 days ago
    Not to be nitpicky (well, maybe a little)...

    El Ajedrecista [1], a chess machine designed by Spanish engineer Leonardo Torres Quevedo in 1912 and later improved in 1915, defeated chess Grandmaster Savielly Tartakower in 1951 [2], a few years before the unnamed lab assistant lost to the Los Alamos program (which must have occurred in or after 1956). That would make Tartakower the first human player defeated by a machine in a chess-like game.

    To be fair, El Ajedrecista played a very limited game, only capable of playing a white king and rook against a black king moved by a human opponent. Still, I think it qualifies as a "chess-like game".

    As a matter of casual trivia, El Ajedrecista inspired Claude Shannon, the founder of information theory, to build his own chess machine [3], which he named Caissac (after the goddess of chess Caissa [4]) in 1950.

    [1]. https://www.chessprogramming.org/El_Ajedrecista

    [2]. https://en.wikipedia.org/wiki/El_Ajedrecista

    [3]. https://www.chess.com/article/view/the-man-who-built-the-che...

    [4]. https://en.wikipedia.org/wiki/Ca%C3%AFssa

  • redbell 11 days ago
    > The reduction of the board size and the number of pieces from standard chess was due to the very limited capacity of computers at the time.

    I wonder what the authors might have achieved with access to modern computers that are billions of times faster than their machines.

    > The computer played three games. The first was played against itself. The second one was against a strong human player who played without a queen. The human player won.

    Again, I wonder what they would think seeing AlphaGo beat Lee Sedol [1], [2].

    1. https://en.m.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol

    2. https://youtu.be/WXuK6gekU1Y

  • polynomial 11 days ago
    Why did they lose the bishops rather than the knights? Seems like if you were simplifying the game for early computers to learn, that's the move (remove the one piece that moves weird).
    • kqr 11 days ago
      Bishops have a worst-case branching factor of 9, while knights have only 8. And this difference is even larger on average; a bishop at b2 can move to 7 places, while a knight on the same square can move to just 4.

      (This can be thought of dynamically as well. If a bishop moves one square closer to a corner, it loses three potential moves but gains one in the opposite direction. If a knight moves closer to a corner, it just loses moves; it never gains them.)
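
      These counts are easy to verify by brute force on an empty 6x6 board (a quick sketch, not anything from the original program):

```python
# Count moves available to a bishop and a knight from each square
# of an empty 6x6 board.
N = 6

def bishop_moves(x, y):
    """Number of squares a lone bishop at (x, y) can reach."""
    count = 0
    for dx, dy in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
        cx, cy = x + dx, y + dy
        while 0 <= cx < N and 0 <= cy < N:
            count += 1
            cx, cy = cx + dx, cy + dy
    return count

def knight_moves(x, y):
    """Number of squares a lone knight at (x, y) can reach."""
    jumps = ((1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2))
    return sum(0 <= x + dx < N and 0 <= y + dy < N for dx, dy in jumps)

print(max(bishop_moves(x, y) for x in range(N) for y in range(N)))  # 9
print(max(knight_moves(x, y) for x in range(N) for y in range(N)))  # 8
print(bishop_moves(1, 1), knight_moves(1, 1))  # b2: 7 vs 4
```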

      • polynomial 6 days ago
        I like the knight on the edge analogy.

        Bishops control up to 13 squares, knights up to 8. The math maths.

        Unless you ask ChatGPT, which will tell you bishops do better in closed positions than knights (before going on to say, correctly, that bishops do better in open positions, without acknowledging or seeming to "notice" that it is flatly contradicting itself).

  • OscarCunningham 12 days ago
    Has this been solved?
    • bongodongobob 12 days ago
      No, we only have endgames solved up to 7 pieces. 24 pieces is waaaay too big to solve.
      • tbrake 12 days ago
        24 pieces on an 8x8 is a nightmare, sure.

        However, other smaller chess versions (e.g. 5x5 w/ 20 pieces) are at least weakly solved and given the piece/pawn/move restrictions of this 6x6 variant, I wouldn't be surprised if a weak solution here is possible as well.

        • Sesse__ 12 days ago
          We can do a rough estimate.

          There are (_very_) roughly half as many squares in 6x6 as there are in 8x8. This means that for every piece you add, you can expect (again very roughly) a 50% bonus to your branching factor. Now look at the best tablebases currently available (Syzygy):

          Up to 5 pieces: 1GB
          Up to 6 pieces: 151GB
          Up to 7 pieces: 17277GB

          So, let's assume the factor of 140 per extra piece; of course this doesn't hold perfectly, but still:

          Up to 8 pieces: 2418TB
          Up to 9 pieces: 338PB
          Up to 10 pieces: 47320PB

          Now divide that last number by our 2^10 bonus. That leaves 46PB for 10-piece tablebases. We were asked for 24-piece. This is unlikely to go down well.

          Of course, there will be some small additional bonus for the fact that fewer positions are possible (easier to be in check), double-pawn-push doesn't apply, no en passant or castling to worry about, and fewer piece types. Still, it honestly doesn't look that good. :-)
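
          The extrapolation above can be reproduced in a few lines (same assumptions: a factor of ~140 per extra piece, then a 2^10 discount for the smaller board; the trailing digits differ slightly from the figures above because of stepwise rounding):

```python
# Extrapolate Syzygy tablebase sizes by a rough factor of 140 per
# added piece, starting from the known up-to-7-piece size (~17277 GB),
# then discount by 2^10 for the smaller 6x6 board.

size_gb = 17277.0
for pieces in range(8, 11):
    size_gb *= 140
    print(f"up to {pieces} pieces: ~{size_gb:.3g} GB")

discounted_pb = size_gb / 1e6 / 2**10   # GB -> PB, then the 2^10 bonus
print(f"10-piece estimate on 6x6: ~{discounted_pb:.0f} PB")  # ~46 PB
```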

          • thethirdone 12 days ago
            Your analysis would pretty similarly apply to 5x5 Gardner's chess which has been weakly solved. Simple tree search can be quite effective as the branching factor is cut down a significant amount (2x in the starting position). Gardner's chess was completely solved without any tablebases.

            It is only an argument against building a complete tablebase. Additionally, because the extra space is reduced, the high-piece-count version is likely to be much smaller than this would predict, because pieces cannot overlap.

            Just looking at the piece positions without regard to the types shows a roughly 2x relationship for tablebase size:

            (64 choose 7) / (64 choose 8) = 8/57 ≈ 14%, whereas (36 choose 7) / (36 choose 8) = 8/29 ≈ 28%

            Additionally, because the tablebase would need to go past 18, the possible piece positions would actually shrink. To be clear, the tablebase would not shrink because of the piece types.
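
            The ratios are quick to check with Python's math.comb (just a sanity-check sketch):

```python
from math import comb

# Ratio of 7-piece to 8-piece placement counts on each board size.
# A larger ratio means adding a piece inflates the table less.
r64 = comb(64, 7) / comb(64, 8)   # 8x8 board: 8/57
r36 = comb(36, 7) / comb(36, 8)   # 6x6 board: 8/29

print(f"8x8: {r64:.1%}, 6x6: {r36:.1%}")  # 8x8: 14.0%, 6x6: 27.6%
```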

            • Sesse__ 12 days ago
              I'm not sure if you can call it “completely solved” if it's only weakly solved, but sure. It's a different goal (and my answer was a response to a “can't we just build a tablebase” comment).
          • tbrake 12 days ago
            I'm really only interested in weakly solved, i.e. "forced draw from starting position" like the 5x5 variant, not a perfect play tablebase from all possible positions.

            Given that 5x5 is (weakly) solved, even though your rough estimates would put its 20-piece 5x5 starting position outside the realm of possibility, I don't think you're discounting accurately how much a reduced board/piece/ruleset can affect those exploding permutations for 6x6 as well.

            • Sesse__ 12 days ago
              Well, it helps if you can meet-in-the-middle, roughly. And it helps a _lot_ to do a weak solve, i.e., you don't care about winning won positions, only showing a draw with perfect play.
          • zone411 12 days ago
            Note that the number of possible legal positions will likely be largest with fewer than 24 pieces. For regular chess, I've calculated that the number of legal positions peaks at 29 pieces: https://github.com/lechmazur/ChessCounter?tab=readme-ov-file...
            • Sesse__ 11 days ago
              It doesn't really matter, since every position with 24 pieces can change into one with fewer. So you need the sum of all of those values.

              For a more sophisticated estimation of the number of legal positions, taking into account _true_ legality (a legal position must be reachable by a sequence of legal moves from the starting position, however dumb), see https://github.com/tromp/ChessPositionRanking. You can probably filter their sample by number of pieces to gain pretty accurate estimations for number of 29-piece positions if you wish.

              • zone411 10 days ago
                This matters in the context of your statement of "factor of 140 per extra piece," which doesn't hold when the number of pieces nears the maximum.
            • tromp 11 days ago
              > For regular chess, I've calculated that there are most legal positions for 29 pieces

              Improved counting shows the maximum to occur at 28 pieces [1] (also see the issue I filed on your github repo).

              [1] https://www.chess.com/forum/view/general/on-the-number-of-ch...

              • zone411 10 days ago
                I don't have time to go through the thread right now, but I see that in its first post you just confirmed the number I calculated (8.7E+45) rather than your earlier estimate of 4.5E+46?
                • tromp 9 days ago
                  Your page only mentions

                  > Upper bound estimate of possible legal chess position (counts en passant, castling, sides to move, allows underpromotions): 8.7E+45.

                  But how exactly is this number determined? Is this a sampling based estimate?

                  My 8726713169886222032347729969256422370854716254 is an exact upperbound on the number of so-called urpositions, and is not sampling based.

                  In any case it will be easier to continue this discussion at https://github.com/lechmazur/ChessCounter/issues/1

        • bongodongobob 12 days ago
          I would be. At a minimum you've got 16 possible moves for each of the first couple of moves. There's no way.
  • once_inc 11 days ago
    How did I go through 4 decades of life without knowing about the en-passant capture? I've never heard of it before...
    • vasco 11 days ago
      It's a common and fun way to make a noob angry, since they'll think you're breaking the rules until you google it in front of them. I've had tough times "proving" it before Google existed, though!
      • alberto_ol 11 days ago
        When I was in high school I won a game in a tournament because I hoped that my opponent didn't know the rule. Otherwise it would have been a draw.
        • vasco 11 days ago
          Also had that happen in a school tournament when I was young! Nowadays, over the board I try to force it in a first game with a new person to see if they know it, and if they don't take my piece, I'll bring it up to prevent it being a problem later :)
    • QuesnayJr 11 days ago
      While it has been part of the standard chess rules for over a century, it's common for people who learn to play with their family as kids not to learn it. I didn't learn it either, and only found out about it when I started playing chess programs online.

      This is common enough that "Google en passant" is a meme in some circles.

  • ChilledTonic 12 days ago
    I'm actually very interested in picking this up as a variant to play regularly; the smaller chessboard and reduced piece count would (theoretically) make for faster pickup play.
    • prometheus76 12 days ago
      5x5 is also really fun. It's basically the queen side, with a row of pawns in front. So there's only one row of open spaces between the two sides. It really stretches my brain to play it, but has helped me get better at calculation for sure.
  • listenlight 11 days ago
    Has anyone here played losers chess? Some of the top players on ICC (chessclub.com) back in the day played losers…
  • michaelcampbell 12 days ago
    Looks interesting to a total onlooker of chess like me. Do the smaller board and reduced rules make the game any easier?
    • ryukoposting 12 days ago
      It makes the game go a lot faster, that's for sure. It lowers the barrier to entry by a lot, while still requiring the same modes of thinking.

      There are a ton of chess-like games out there that have small boards and play times of 10-20 minutes. IMO as an everyday thing to do with friends, they're a lot more fun than chess.

      • mxwsn 12 days ago
        Any suggestions or favorites?
        • ryukoposting 12 days ago
          Onitama for sure: https://en.wikipedia.org/wiki/Onitama

          It's played on a 5x5 board. You get two cards that tell you what moves you can make. Either card can be used with any of your pieces. Once you've used a card, it will end up in your opponent's hand a couple turns later.

          Every time you play, it's a little bit different because the cards are different. In addition to the chess-like thinking, there's a lot of strategy in learning when to use particular cards. Some cards are really powerful, and it's worth keeping them in your hand until the right moment.

          IME a single round of Onitama usually lasts 10-15 minutes. The shortest game I've played was probably about 6 minutes, and the longest was 30.

          It's simple enough that you can just pull it out and play with anyone, even if they've never seen it before. Pretty much everything you need to know to play Onitama is printed on the cards sitting right in front of you. It's pretty easy to pick up.

          It's also nice that you only have to clean up 10 pieces after a game instead of 32.

    • umanwizard 12 days ago
      Yes, that reduces the search space
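      A rough back-of-the-envelope illustration of the effect. The branching factors below are assumptions (around 35 is a commonly cited average for standard chess; the 6x6 figure is a guess), not measured values:

      ```python
      # Sketch: why a smaller board shrinks the game tree.
      # Branching factors are assumed, not measured.
      def tree_size(branching_factor, depth):
          """Rough number of leaf positions at a fixed search depth."""
          return branching_factor ** depth

      full_chess = tree_size(35, 6)   # ~1.8 billion leaves at 6 plies
      los_alamos = tree_size(20, 6)   # 64 million leaves at the same depth
      ratio = full_chess / los_alamos  # roughly 28x fewer positions to search
      ```

      Even a modest drop in branching factor compounds exponentially with depth, which is exactly what a machine with MANIAC's speed needed.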
  • abound 12 days ago
    Not to be confused with Atomic Chess: https://en.m.wikipedia.org/wiki/Atomic_chess
  • Y_Y 12 days ago
    1. Feynman to U-235
  • AdmiralAsshat 12 days ago
    So judging from the omission of the bishop and the queen, the computer really hated calculating diagonals, huh? I would've assumed that calculating the knight's movements would've been the bigger headache.
    • InitialLastName 12 days ago
      The queen is there. It probably made for a more interesting game to keep the knights for their unique moveset rather than the bishops (given the need to reduce to 6x6).

      Between the ~half-sized board, the intrinsic limitations in where they can go, and the reduced pawn movement, bishops would be very clumsy if left in.

    • Sharlin 12 days ago
      I don’t think there’s anything difficult in calculating a knight’s moves. On the other hand rooks, bishops, and queens generate many moves at a time, making the game tree much wider, which might make a difference, or it might not. But anyway, as noted by the sibling, queens were there (except in the second game, in which Kruskal played without and the computer played with, to even the odds), and the omission of bishops in particular had nothing to do with evaluation difficulty per se.
      • Sesse__ 12 days ago
        Humans are probably worse with knights than any other piece, relative to computers. Queen is a possible second; queen endgames are super-complex for humans since there are so many options and very few of them can be easily pruned. (I guess you can say that humans are bad with knights and computers are good with queens, especially endgame queens.)

        Diagonal sliders do create lots of headaches for bitboards, but those were not used for chess before 1967 or so, i.e., more than a decade later. Generally one uses magic bitboards today (basically a perfect hashing technique to extract the bits of the diagonals a piece can attack into a neat little linear bit range), but they depend on a fast multiplier and a fair amount of RAM for LUTs, which would of course not have been available on the MANIAC. I would assume that MANIAC just looped over all possible moves in the move generation phase.
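
        To make the pre-bitboard approach concrete, here is a minimal sketch of ray-based move generation for a rook on a 6x6 board, the kind of direct loop an engine of that era would plausibly use (the data layout is invented for illustration, not taken from the original program):

        ```python
        # Ray-based rook move generation on a 6x6 (Los Alamos) board.
        # Squares are (file, rank) tuples; `occupied` maps a square to
        # 'w' or 'b' for whichever side has a piece there.
        SIZE = 6
        ROOK_DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

        def rook_moves(square, occupied, side):
            """Slide along each rank/file direction until the edge or a piece."""
            moves = []
            for df, dr in ROOK_DIRS:
                f, r = square
                while True:
                    f, r = f + df, r + dr
                    if not (0 <= f < SIZE and 0 <= r < SIZE):
                        break  # ran off the board
                    if (f, r) in occupied:
                        if occupied[(f, r)] != side:
                            moves.append((f, r))  # capture
                        break  # blocked either way
                    moves.append((f, r))  # quiet move
            return moves
        ```

        No multiplies, no lookup tables: just a loop per direction, which is cheap to implement even on very limited hardware.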

        • Sharlin 12 days ago
          Thanks, interesting!
    • silisili 12 days ago
      They retained the queen; it was just that the human played without one against the computer, probably to give MANIAC a fighting chance.
  • cyberax 12 days ago
    I am disappointed because it doesn't involve nuclear bombs.
    • jameshart 11 days ago
      Hmm… something like:

      Each player starts with only a king on the board.

      Player turns consist of either:

      1) adding a pawn to the board

      2) moving the king

      Or 3) launching one, some, or all of their pawns already on the board.

      Launching a pawn involves removing the pawn from the board, and nominating a square where the pawn will land. If there is an opposing piece on that square at the end of the opponent’s next turn, that piece is removed from the board.

      If your king is removed you lose.

    • hammock 12 days ago
      I was thinking spontaneous decay - at a random time every so often, one of your pieces may "decay" and be demoted, from a major piece to a minor one, or from a minor one to a pawn :)
    • kevindamm 12 days ago
      Yeah, I was expecting chess piece variants like Fairy Chess... but there still can be! Do we add wind and fallout? Or make it a MAD type of thing where the meta-game is a kind of Prisoner's Dilemma? Either way, we could call it Manhattan and give it a go..
  • csmeyer 12 days ago
    "Pawns may not be promoted to bishops;"

    ??????????

    For real, I would absolutely love to know why that constraint was needed. Does anyone have info on why?

    • billforsternz 12 days ago
      I don't know why you've put lots of question marks, surely this is completely understandable. The whole point of this was that real chess was too big and complex, hence this dramatically smaller, simpler version. There are no bishops in the starting position, so there's absolutely no way they'd bother implementing the bishop underpromotion corner case.

      Amusingly enough, Rybka, the top engine for many years until comparatively recently, didn't implement bishop underpromotion either, for a much different reason. The programmer chose a not-so-great low-level bit representation convention that forbade representing this case. He refused to fix the error because it would be too much work for the more-or-less non-existent situations where promoting to a bishop is the best move.
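
      As an illustration of how such a representational gap can arise (this is an invented encoding, not Rybka's actual format): with a 2-bit promotion field where one value means "no promotion", only three promotion pieces fit, and the bishop is the natural one to drop:

      ```python
      # Hypothetical compact move encoding: 6 bits from-square, 6 bits
      # to-square, 2 bits promotion. Since 0 is reserved for "no
      # promotion", there is no code point left for a bishop.
      PROMO_NONE, PROMO_KNIGHT, PROMO_ROOK, PROMO_QUEEN = 0, 1, 2, 3

      def encode_move(from_sq, to_sq, promo=PROMO_NONE):
          """Pack a move into 14 bits."""
          assert 0 <= from_sq < 64 and 0 <= to_sq < 64
          return (from_sq << 8) | (to_sq << 2) | promo

      def decode_move(m):
          """Unpack into (from_sq, to_sq, promo)."""
          return (m >> 8) & 0x3F, (m >> 2) & 0x3F, m & 0x3
      ```

      Fixing a scheme like this after the fact means widening the move word and touching everything that stores moves, which is why it might be left unfixed for a case that essentially never decides a game.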

      • teo_zero 12 days ago
        Interesting. Does anybody have an example where a promotion to a bishop would be preferable to a queen?
        • Sesse__ 12 days ago
          Basically anything where a bishop would win, but a queen would put the other side in stalemate.

          For an extreme example, see https://en.wikipedia.org/wiki/Babson_task.

        • billforsternz 12 days ago
          I'm at work so I'm not going to try to run experiments, but my guess at the relative frequency of (under)promotion being the best move (queen = 1) is something like:

          Knight 1e-3, Rook 1e-5, Bishop 1e-7

          A complicating factor in measuring this is that often a rook serves just as well as a queen (the opponent is going to capture immediately anyway), so players sometimes underpromote to a rook just for the fun factor. My numbers are meant to represent cases where underpromotion really is the best move. Knight underpromotion is by far the most important, because a knight can do things a queen can't, most typically give check to keep a combination going without giving the opponent an opportunity to play an active move. Rook and bishop underpromotions can only be advantageous when they simultaneously avoid stalemate and keep a winning advantage. This is much more likely with a rook, for example in the famous Saavedra position, perhaps the most famous chess problem of all (it featured as decoration on our wedding cake :).

        • seanhunter 11 days ago
          It is extremely rare. I saw an example involving Fabiano Caruana (currently world #2)[1], and he was visibly excited in the post-match interview and said that was the first time in his whole career (thousands, possibly tens of thousands of games, especially if you count online play) that he had done it.

          Edit to add: here's the game. It was Caruana vs Robson, 2021, in the 7th round of the US Championship. Fabi's final move was to promote to a bishop, at which point Ray Robson resigned: https://lichess.org/study/embed/iLDop9iy/wCdsPJJI He did it for style points; a queen would have been just fine in that position.

          [1] https://youtu.be/Bz-V5wms0pk

        • rdlw 12 days ago
          An example from a real GM game: https://youtu.be/CBoS9hSM6oM
    • abricq 12 days ago
      This rule is needed because otherwise, as in chess, a pawn could promote to a bishop.

      If you only remove the bishops from the starting position, then the rules of chess still allow bishops to come in later in the game (through a promotion).

      I am assuming they did not have enough memory to hold the movement rules for bishops, and therefore decided to remove them entirely from the set of possible pieces. Rook and knight promotions are allowed because those pieces are in the initial setup: their rules are already implemented.
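
      A tiny sketch of how the restriction falls out naturally under that assumption: if bishops were simply never implemented, the promotion set is just the pieces the engine already knows how to move (names here are illustrative, not from the MANIAC code):

      ```python
      # Los Alamos chess: the only piece types ever implemented.
      # No bishop exists anywhere in the rule set, so it can't be a
      # promotion target either.
      IMPLEMENTED_PIECES = {'K', 'Q', 'R', 'N', 'P'}
      PROMOTION_CHOICES = IMPLEMENTED_PIECES - {'K', 'P'}  # Q, R, N

      def promotion_moves(pawn_square, target_square):
          """One pseudo-move per legal promotion piece for a pawn reaching the last rank."""
          return [(pawn_square, target_square, piece)
                  for piece in sorted(PROMOTION_CHOICES)]
      ```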

      • csmeyer 12 days ago
        Oh duh... well my question was not insightful but yes your answer makes sense, there are no bishops on the board lol
    • seanhunter 11 days ago
      Because they modelled the game with no bishops. You don't start with bishops and you also can't promote to a bishop.