While researching / reading up / debugging, you stumble upon something interesting. Upon looking into that, yet another subject catches your attention.
You know how this goes. So... (see title). Bonus questions: what intermediate steps did you pass along the way? What stuck in your mind the most?
Linux proved interesting enough that I kept finding all sorts of cool new rabbit holes to go down - shell scripting, filesystems, Python, databases. It was side-quests within side-quests! Plus, having kicked my gaming habit, I had plenty of time to explore these.
Anyway, to cut a long story short, that was 23 years ago. I ended up getting a career in tech, relocated, got married, had kids, lived the American Dream... The "life" rabbit hole kind of got in the way of my plans, so I can't wait to finally get back on track and play GTA III on a decent box.
It got me to learn C, graphics programming, operating systems, networks, and firewalls. Literally everything I wanted to do required a couple of days deep in the Arch Linux wiki, learning about all kinds of interconnected systems.
The real world only has pain and suffering. Endless trials and never a payout.
Games, on the other hand, are very detailed and have a well-defined path to success.
I need to get off my ass and start working towards better things in real life in order to (potentially) better my situation, e.g. a new job, better hobbies, etc.
However, I have always had issues with gaming on and off throughout my life (perhaps a lot more on than off). I seriously think that a lot of my issue with gaming is that games are preferable to life in many regards. In a game, I know that if I work hard and follow the steps/guides/quests, I will be rewarded. Goals are attainable, and if I fail to achieve them it is my fault -- because I did something incorrectly.
Sadly, when I take breaks from gaming, I am not a productivity machine. I just find something else to waste the time with.
In the back of my mind, I want to believe that if I work hard and better my situation, then I will finally be rewarded. But I have worked hard to get where I am, and I am still awaiting the reward, so to speak.
So, I think a part of my brain has taken the shortcut to destroy my motivation because I know that Sisyphus isn't the only one rolling the boulder up the hill thinking, "maybe this time will be different?"
The areas I want to improve in my life mainly pertain to my career. I believe it is important to note that I am not a Type-A, career focused, go-getter kind of person.
I am a developer like many others on this site. I am also quite unfulfilled and unsatisfied in my current position. I am grateful to be employed, but when I describe my position to others, I am generally met with a response like, "Run! Now, and never look back."
So, in order for me to get out of my current position, I am going to have to put in a lot of work in my free time. That's not necessarily a bad thing, of course. However, say I land a new job due to my efforts. I seriously do not think it will be fulfilling nor increase my happiness in the grand scheme of life. I seriously think wherever I end up will be a different round of the same "game" with different players, so to speak.
I do not feel that my motivation to change comes from some internal burning passion, but rather from some obligatory "should" feeling, i.e. I should find a better job, I should study <insert topic>, I should work out more, etc.
Though, you are right -- gaming has no bearing on my real life. The rewards from any game are, more or less, fruitless. But I guess my point is that real life is pretty damn fruitless too. If anything, life just feels like chasing a carrot hanging from a stick. I have lived my entire life with this concept of ever-moving delayed gratification. There is no end to the real-life grind, no clear guaranteed steps on how to progress, no checkpoints, no redos, etc. Seriously, what am I missing? What am I grinding for?
Well-designed video games can be as hard as or even harder than real life, but they give you constant endorphin or dopamine hits to keep you going.
And you can take a break from video games without any negative repercussions.
Still, it sounds like you may be in a bad place, and my heart goes out to you. Hang in there. Life doesn’t always get better, but seasons change, and this too will pass.
Same here.
Life happened.
What I told my dad would never happen (me not using Linux From Scratch or Gentoo; like, how can he use Windows and just get things done instead of digging deep and solving problems all the time?) did happen.
I now use Debian-derived distros that are more up-to-date and easier for mainstream people (like, shudder, Ubuntu). They're relegated to server and TV duty, and I hope nothing breaks when I upgrade from LTS to LTS, because "I don't have time for this!" And I play a few games here and there on my Windows laptop.
But I think it's important to have gone through the "rabbit hole" in the middle. All the digging and understanding I did I still do all the time at work. I just no longer spend the other half of my life on it. I spend it digging into other things.
I _think_ the first Linux distro I ever installed was Slackware, but ashamedly, I can't remember. In my defense, I would've been about 10 years old at the time. I played around with a ton of others (I'm fairly certain I tried Debian Woody), but settled on Gentoo, because of course. Forget just tying up the phone line for dial-up, I tied up both of the family computers with distcc.
After a career break wherein I did relatively little with computers for a decade, I got back into Linux and quickly realized I did not care about -funroll-loops. I've been running Debian non-stop since Jessie, on everything from repurposed laptops, to ancient tower servers, to slightly-less-ancient racks.
> But I think it's important to have gone through the "rabbit hole" in the middle.
This right here. There is an endless stream of "how do I learn Linux?" questions on Reddit, and the answers are always some variation of "read this book," "take this course," etc. Perhaps there is value there, but I learned it by trying to do stuff. Like getting an HP PSC 2610 to talk to hplip and CUPS over LAN. Or getting a Chaintech AV-710 (an obscure sound card that happened to use an excellent DAC for 2-channel output) to work under ALSA. Doing these kinds of things forced you to read man pages, forums, newsgroups, etc. And when you succeeded, you could write up a HOWTO, and the three other people in the world who also needed this particular combination would give thanks.
(Digression: did you know libpng, the one everyone uses, is not supposed to be an optimized production library—rather it's a reference implementation? It's almost completely unoptimized; no really, take a look anywhere in the codebase. Critical hot loops are 15-year-old C that doesn't autovectorize. I easily got a 200% speedup with a 30-line patch on something I cared about (their decoding of 1-bit bilevel to RGBA). I'm using that modified libpng right now. I know of nowhere to submit this patch. Why the heck is everyone using libpng?)
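For flavor, here's the table-driven kind of expansion being described, sketched in stdlib Python rather than C. The packing order (MSB-first), colors (0 = black, 1 = white, opaque alpha), and row widths being a multiple of 8 are all my assumptions, not libpng's actual code:

```python
# Precompute once: each possible packed input byte expands to 8 RGBA
# pixels (32 bytes), so the hot loop is one table lookup per byte.
LUT = []
for byte in range(256):
    out = bytearray()
    for bit in range(7, -1, -1):          # MSB-first bit order (assumed)
        v = 255 if (byte >> bit) & 1 else 0
        out += bytes((v, v, v, 255))      # grey value + opaque alpha
    LUT.append(bytes(out))

def expand_row(packed: bytes) -> bytes:
    """Expand one packed 1-bit row to RGBA with one lookup per input byte."""
    return b"".join(LUT[b] for b in packed)

# 0b10000001 -> white pixel, six black pixels, white pixel
row = expand_row(bytes([0b10000001]))
```

The same trick in C (a static 256 x 32 byte table plus memcpy-sized stores) is the sort of thing that autovectorizes trivially, which bit-at-a-time loops don't.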
The worst offender (so far) is the JBIG2 format (several major libraries, including jbig2dec), a very popular format that gets EXTREMELY high compression ratios on bilevel images of the types typical to scanned PDFs. But it's also a format that's pretty slow to decompress—not something you want in a UI loop, like a PDF reader is! And there's no way around that—if you look at the hot loop, which is arithmetic coding, it's a mess of highly branchy code that's purely serial and cannot be thread- nor SIMD-parallelized. (Standardized in 2000, so it wasn't an obvious downside then.) I want to try to deep-dive into this one (as best as my limited skill allows), but I think it's unlikely there's any low-hanging optimization fruit, like there's so much of in libpng. It's all wrong that everyone's using this slow, non-optimizable compression format in PDFs today, but no one really cares. Everyone's doing things wrong and there is no way to stop them.
Another observation: lots of people create PDFs at print-quality pixel density that's useless for screens and greatly increases rendering latency. Does JBIG2 support interlacing or progressive decoding, to sidestep this challenge? Of course it doesn't.
Everyone's doing PDF things wrong and there is no way under the blue sky to make them stop.
Looking at the jbig2dec code, there appears to be some room for improvement. If my observations are correct, each segment has its own arithmetic decoder state, and thus can be decoded in its own thread. The main reader loop[1] is basically a state machine which attempts to load each segment in sequence[2], but it should not need to. The file has segment headers which contain the segment offsets and sizes. It should be possible to parse the segment headers first, then spawn N threads to decode N segments in parallel. Obviously, you don't want the threads competing for the file resource, so you could load each segment into its own buffer first, or mmap the whole file into memory.
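The shape of that idea, sketched in Python. Everything here is a stand-in (the fake fixed-size segment layout, the XOR "decoder"); it only illustrates parse-headers-then-fan-out, not jbig2dec's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def parse_segment_headers(data: bytes) -> list[tuple[int, int]]:
    # Stand-in: pretend the headers told us each segment's (offset, size).
    # Here we just fake fixed 4-byte segments for illustration.
    return [(i, min(4, len(data) - i)) for i in range(0, len(data), 4)]

def decode_segment(data: bytes, offset: int, size: int) -> bytes:
    # Stand-in for the real per-segment arithmetic decoder, which keeps
    # its own state and needs nothing from other segments.
    return bytes(b ^ 0xFF for b in data[offset:offset + size])

def decode_parallel(data: bytes) -> bytes:
    segments = parse_segment_headers(data)
    with ThreadPoolExecutor() as pool:
        # map() yields results in submission order, so the decoded
        # segments stitch back together correctly.
        parts = pool.map(lambda seg: decode_segment(data, *seg), segments)
    return b"".join(parts)
```

In C you'd do the same with a thread pool over buffers (or an mmap'd file), joining the per-segment outputs in header order.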
[1]:https://github.com/ArtifexSoftware/jbig2dec/blob/master/jbig...
[2]:https://github.com/ArtifexSoftware/jbig2dec/blob/master/jbig...
Yeah, but real-world PDF JBIG2 streams seem to usually have one segment! One of the first things I checked—they wouldn't have made it that easy, the world's too cruel.
It's sort of a generic problem with compression formats—lots of files could easily be multiple segments that decompress in parallel, but aren't—if people don't encode them in multiple segments, you can't decompress them in multiple segments. Most formats support something like that in the spec, but most tools either don't implement that, or don't have it as the default.
e.g. https://news.ycombinator.com/item?id=33238283 ("pigz: A parallel implementation of gzip for multi-core machines"—fully compatible with the gzip format and with gzip(1)! No one uses it).
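The mechanism pigz relies on is visible even from the stdlib: a gzip file is a sequence of independent "members", so chunks compressed separately (as parallel workers would produce them) concatenate into a stream any standard decompressor accepts:

```python
import gzip

chunks = [b"hello ", b"parallel ", b"world"]

# Compress each chunk independently, as parallel workers could...
members = [gzip.compress(c) for c in chunks]

# ...then simply concatenate the members. This is a valid gzip stream.
stream = b"".join(members)

# A standard decompressor reads all members back to back.
assert gzip.decompress(stream) == b"hello parallel world"
```

The format has allowed this all along; the problem is purely that most encoders emit one giant member by default, so decoders have nothing to parallelize over.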
I guess the only way to tackle it would be to target the popular software or libraries for producing PDFs to begin with and try to upstream parallel encoding into them.
Or is it possible to "convert" existing PDFs from single-segment to multi-segment PDFs, to make for faster reading on existing software?
The downside is that any PDF conversion is a long-running batch job, one that probably shouldn't be part of any UX sequence—it's way too slow.
Emacs' PDF reader does something like this: when it loads a PDF, its default behavior is to start a background script that converts every page into a PNG, which decodes much more quickly than typical PDF formats. (You can start reading the PDF right away, and by the end of the conversion, it becomes more responsive.) I think it's a questionable design choice: it's a high-CPU task during a UI interaction, and potentially a long-running one for a large PDF. (This is why I was profiling libpng, incidentally.)
https://www.gnu.org/software/emacs/manual/html_node/emacs/Do...
Funny you say that, https://en.wikipedia.org/wiki/JBIG2#Character_substitution_e...
> Another observation: lots of people create PDF's at print-quality pixel density that's useless for screens, and greatly increases rendering latency.
Is this relevant to text in the PDF? I would assume text is vectorized, meaning resolution is not relevant until you _actually_ print it?
Or is it just relevant to rasterized content like embedded images?
[0] https://en.wikipedia.org/wiki/JBIG2
[1] https://en.wikipedia.org/wiki/Fax#Modified_Modified_READ
(You can examine this stuff with pdfimages(1)—or just rg -a for strings like /JBIG2Decode or /CCITTFaxDecode and poke around).
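The `rg -a` trick works because PDF stream filters are named in plain text inside the file, so a raw byte search is enough to see which compression formats a given PDF actually uses. The same thing in a few lines of stdlib Python (the filter list here is just the common image codecs, not exhaustive):

```python
# Scan raw PDF bytes for stream filter names, the same way rg -a would.
FILTERS = [b"/JBIG2Decode", b"/CCITTFaxDecode", b"/DCTDecode", b"/FlateDecode"]

def filters_used(pdf_bytes: bytes) -> list[str]:
    """Return the known filter names that appear anywhere in the file."""
    return [f.decode() for f in FILTERS if f in pdf_bytes]

# e.g. filters_used(open("scan.pdf", "rb").read())
```

pdfimages -list gives the per-image breakdown (dimensions, DPI, codec) if you want more than a yes/no.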
I'd assume that the photo-type image decoder is optimized, right? If so, how does the optimized photo-type decoder compare to the apparently unoptimizable JBIG2 decoder?
How about the folks listed as "Authors":
* http://www.libpng.org/pub/png/libpng.html
> Why the heck is everyone using libpng?
What is the alternative(s)?
To add/remove: mutool merge -h
To split PDF pages: mutool poster -h
I made a script here that I use frequently for scanned documents: https://github.com/chapmanjacobd/computer/blob/main/bin/pdf_...
The thing about DIYing audio (primarily speakers, but also amps, DACs, etc.) is that you can get top-of-the-line performance for a fraction of the market price. A $50,000 speaker setup that would bring tears to your eyes could be made for perhaps $5,000. A DIY $500 kit can perform similarly to a $2,000-3,000 set of speakers. Open-source amps with Gerber files on GitHub are amazing.
The biggest reason it's so easy to get amazing value is that a $600 speaker only has $150 of materials in it. Upgrading its $25 woofer to an $80 one would help a lot, but no company would do that without then selling it for $1,000 if they could.
However, the biggest allure for me is not beating commercial systems on cost, but making what I want. A small speaker with deep bass? Easy. Speakers with quasi-active noise cancellation behind them? Sure, why not. Speakers that'll make the most overpowered/fancy beach boombox sound like a crappy toy? Simple.
The only limit is your imagination and time/money.
I'd very much recommend diyaudio.com, but be warned, parts of this field are mature while others are still in effective infancy. Also, being an engineer (electrical/mechanical) helps a lot, there's a ton of signals processing and electrical/mech oscillation.
Additionally, sites like diyaudio.com are better when you want a specific thing built and are looking to learn more about techniques, new parts etc.
I will admit that stuff like "speakers with quasi-active noise cancellation behind them" sounds intriguing. That's probably a good reason to get into this rabbit hole!
Absolutely. Don't forget, these guys are: a. Humans, and b. Operating for a company to make a profit. When you're DIYing you're (generally) not concerned about the latter part at all.
There's a few more reasons why DIY is so capable:
1. High quality drivers are available to purchase. There are companies like Tymphany/SB Acoustics etc that are OEM/ODM manufacturers selling to the big names. You can get the same/very similar models from parts express and other sites.
2. A lot of the engineering principles are well understood, public science. In fact many experts hang out on websites like DIYaudio.com. They're human. You can see their workings, opinions, doubts etc up close.
3. Some speakers like the Dutch&Dutch 8c's started their lives on forums like diyaudio. Which is to say, they went from DIY level to "well-reviewed" level in a manner that's quite clear/transparent to anyone familiar with the forum/DIY. No "hidden" black magic involved.
4. You have a lot of amazing designers on these forums putting their designs out for free. Jeff Bagby, Paul Carmody, Troels Gravesen, Perry Marshall etc. Check out Perry's comment on his speaker below. Btw, he's a professional designer having worked across a number of audio & car companies designing AV systems.
Now, if you want to design your own speakers and not use an existing model, yes you'll need to learn a lot. But it's very much doable. It may take time/money/effort, but beating a top of the line system for a fraction of the (material, not labour) cost is possible and has happened.
[0] - https://www.diyaudio.com/community/threads/ultimate-open-baf...
Latest at https://fourays.lon.dev
I've just got to the point where I think I know what the module is going to be, but last night found out that PCB manufacture puts additional constraints on the PCB design, so I have to go back and re-do a lot of it, including probably dropping some features to make it simpler. The learning never ends.
https://www.youtube.com/@HexiBase in case you're having trouble finding it with that spelling.
I would now put learning CAD in the same category of mandatory life skill as learning to code. The ability to translate what you see in your mind to something that can be repeatably fabricated is an incredible power move, akin to learning how to communicate complex ideas with empathetic language.
My advice is to start by following this tutorial step-by-step. It's a 90 minute video that took me ten days to get through. Step two is to take an existing project and change it in a significant way. Step three is to create something from scratch which solves a problem that you have.
https://www.youtube.com/watch?v=mK60ROb2RKI
However, I do have two nits. The first is the overwhelming focus on pricing model as a feature. I get it; SaaS is frustrating AF. However, I also understand that it's very difficult to build a functioning business around tooling that is free. There's a huge number of people who just can't understand why we don't have free Fusion/Z-Brush equivalents. It's easy to fix... just set up a $5-10M/year donation schedule to a group of people currently working for Autodesk and you could definitely have an OSS competitor to Fusion in a year or two.
In reality, people using powerful tools are used to paying a bit of money for those tools, and I honestly feel like that's how it should be. The people screaming the loudest for free-as-in-beer CAD that doesn't suck are also likely the worst customers that you wish you didn't have. Anyone who has ever noticed that non-profit clients always want to argue about billing the most will be nodding.
Second nit is that as awesome as Plasticity is, it's really a modelling tool (like Z-Brush) that is influenced by CAD, not the other way around. And I believe that there's a huge market segment for this! Game asset creators come to mind.
But there are huge swathes of workflow functionality that Fusion nails which just don't seem to be present in Plasticity. There's no component hierarchy, no timeline. The whole relationship between sketches and operations is lost in favour of just slicing through stuff... which is cool until you need to change your primary enclosure dimensions and expect every aspect of your design to re-calculate and adapt.
There's more to "parametric" than being able to change parameters on a tool. I try to describe to friends that in a tool like Fusion, the geometry itself is a parameter, the operations are like lambda functions, and the timeline can be rolled backwards and forwards like git commits.
When you have the lightbulb moment for all of this, it's really hard not to be annoyed when people attempt to shame you for not using FreeCAD, as is happening elsewhere in this thread.
[0] https://resources.sw.siemens.com/en-US/download-solid-edge-c...
These are often ambitious hobbyists who have quite some free time for their hobby, but little money (also since they don't earn money by using the CAD program).
The most rational choice, in my opinion, would be to sell very cheap licenses without any support to these people: you don't lose any money, because they wouldn't be customers for the expensive licenses anyway; on the other hand, because of some of these people's devotion, it makes the CAD software a possibly better choice for many companies, thanks to the existence of more people to hire who know the software quite well.
Learning G-code allows you to start using CNC machines - even a simple 2-axis plasma cutter can do interesting things. 3-axis machining centers can make things that are quite remarkable. I am now stepping into 5-axis.
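To show how little there is to it, here's a sketch that emits G-code for a square profile on a 2-axis machine like that plasma cutter. The core moves (G21/G90/G0/G1) are standard, but the M3/M5 torch on/off codes and the feed rate are machine-specific assumptions:

```python
def square_gcode(size: float, feed: float) -> str:
    """Emit G-code for a square toolpath of the given side length (mm)."""
    lines = [
        "G21",                        # units: millimetres
        "G90",                        # absolute positioning
        "G0 X0 Y0",                   # rapid move to the start corner
        "M3",                         # torch/spindle on (machine-specific)
        f"G1 X{size} Y0 F{feed}",     # cut along the bottom edge at feed rate
        f"G1 X{size} Y{size}",        # right edge
        f"G1 X0 Y{size}",             # top edge
        "G1 X0 Y0",                   # back to start
        "M5",                         # torch/spindle off
    ]
    return "\n".join(lines)

print(square_gcode(50, 1200))
```

Once you see the output is just text, driving it from a script (parametric sizes, nested parts, tabs) becomes the obvious next rabbit hole.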
Here I wonder the same thing. Not that everything joyful must be productive. But if there was a way to apply this to something that was neat in the real world I think I’d be far more motivated to learn the skill. And enjoy it more.
However, I am going to gently push back by pointing out that you're not connecting the dots between knowing how to use CAD to create solutions to problems, and having cheap 3D printers available that can make those solutions real.
In other words, your mistake might be looking externally for what you should be making. It's not so much a failure of imagination as not training your brain to make creating objects one of the first steps on the path to problem solving. Perhaps a good analogy is how people go from asking GPT-4 only things they've heard other people try, to treating asking GPT-4 about everything as normal as brushing their teeth.
So like, as much as it's awesome that I could realize I can print my own reels (for pick and place) from an STL off Thingiverse, my main use of my 3D printer at this point is to print off plastic prototypes of circuit boards and custom enclosures that I'm working on. Not only does this allow me to verify clearance (I actually saved myself five digits and months of pain recently by realizing that the 1/4" audio jacks would not allow my board to be inserted as designed) but it gives me something I can put in people's hands. I've found that, over many years, you can describe things to people and they will nod like they get it, and then when I put the real thing in their hands, they say something roughly like, "oh, this is what you meant". Which I used to find frustrating, and now I just accept it.
Right now, I'm working with the company in China that makes hard shell cases for basically every consumer product. They are sending me revisions of the insert that will hold everything safely. I print them off and then send photos and measurements back of how everything fits (or doesn't) which completely avoids the expensive and slow process of them making a mold, sending me a sample and me testing it. I've literally saved months and thousands doing this. It's awesome.
Similarly, you might have heard that injection molding is incredibly expensive to get started with and that there are fussy design rules you must follow. Well, engineers have recently clued in to the realization that we can essentially 3D print the molds, saving thousands and many lost weeks. Right now there's this crazy arbitrage where about 90% of product designers don't appear to realize that this is a thing, yet.
I could go on and on. The only takeaway is that as you normalize CAD and 3D printing as a go to tool the same way you probably think screwdrivers are pretty normal, you realize that you have more things you need a 3D printer for than things you need a screwdriver for. And that escalation can be really fast.
Addendum 1: Also, remember that it's not just 3D printing. Creating photo-realistic renders of something that doesn't exist yet can save the day. But there's also subtractive processes like CNC which is in some ways even more useful than additive processes like 3D printing. There's a Kickstarter right now for Carvera Air that a lot of folks should get in on.
Addendum 2: One of my very favourite theoretical use-cases for 3D printing is printing prosthetic limbs for animals. I say theoretical because I've never done it personally... but I intend to. I'm a total sucker for this concept and I want to have time to get involved someday. Lots of videos on YouTube, like https://www.youtube.com/watch?v=dP3Kizf-Zqg and https://www.youtube.com/watch?v=EynjYK45dyg and https://www.youtube.com/watch?v=sdFtMRko2GU
However, I would recommend the open source https://solvespace.com! It hits a sweet spot between features vs complexity/learning effort. (And as a programmer I dig the terminal aesthetics)
I'm now looking at going down the Blender / render rabbit hole as Fusion can only get you so far.
What sort of problems do you hope Blender can address that aren't tackled by Fusion?
In a similar vein, I also recommend taking a long look at game development engines, as they are in the same category of tool that have many uses beyond making games:
- quick UI mockups
- environment walk-throughs
- product demos
- VR environments
- best way IMO to teach coding to kids
I use and generally like Unity, although if I was starting over today, I'd be taking a good look at Godot.
TL;DR if you need to do interactive renders/movies a gamedev engine might actually be more generally useful to a coder than Blender.
I felt that Fusion was limited with the textures and materials available for rendering. I think Fusion is good enough for internal purposes but I want better looking renders for external uses.
I've worked on scrappy product teams that lacked a dedicated industrial designer. Usually, the mechanical engineer on the team would come up with a design and I'd come up with simple renderings for marketing docs, show investors, etc.
Most of the industrial designers I follow on linkedin all use Blender, I hadn't even considered Unity, I'll check it out.
Curious how you best learned how to communicate complex ideas with empathetic language?
(That's not a beginner-level project, to be clear)
example: https://www.youtube.com/watch?v=dejend_kx94
I'm a decently able self-taught CAD user now, of the level where I can reasonably quickly pick up a new piece of software. And yet... I've lost count of the number of times I've reinstalled FreeCAD thinking "this time it will be different"... and then quickly removed it again. Compared to anything reasonable it's just an awful hot mess to try and figure out, with huge quirks, a weird interface, and unhelpful error messages.
Given the reasonable pricing, I'm interested to try Plasticity, although it's not strictly CAD in the sense of Fusion360/Solidworks/etc - it's currently more of a modelling program. It also doesn't have the parametric + history features that are really valuable in other products.
The truth is, we're crying out for a decent open-source CAD program. Everything currently available (FreeCAD, OpenSCAD, SolveSpace, CadQuery, etc.) has huge usability and/or feature deficits compared to the commercial offerings.
> I tried and tried and tried to get into [FreeCAD]. It promises so much, but there are two fatal flaws in my opinion. First, the UI is a nightmare. I have no idea which "workbench" I am supposed to be using, and there are so many similar choices available, each with subtly different tools and ... I gave up trying to make sense of it. Secondly, even when following tutorials to get some basic modelling done, I found lack of sensible keyboard control and having to click almost everything a real distraction. Not a good experience.
It's an absolute nightmare to use. Really the worst UX I've ever seen.
I'd love to have something that is to CAD what KiCAD is to the EDA world.
I fully expect, at any moment, Autodesk to pull the rug from under the hobbyist licence, and then I'll not be able to use it anymore.
While I feel kinda 'dirty' when I use it, I have been using it for maybe 8 years and the thing is a masterpiece. It's like a third arm for me.
Once your brain knows the whole constraints workflow it is so natural. I can very quickly model up pretty much anything I can think of to make.
Time spent thinking "I'm sure it's just me" in FreeCAD is time you will never get back.
Licensing is a thing with Fusion, but last time I looked it was free (as in beer) for hobbyists. The grey zone is where you turn your hobby into a small-scale business.
I started out when I got a new phone and didn't know what to do with the old one. One of the ideas I had was a home server. Turns out it's not trivial to run Docker even on rooted Android phones; you need a lot of kernel patching, tweaking and more, and it still had issues after that.
The next step was when I figured out I could install postmarketOS on it, and I managed to flash it, SSH into it and set up Nextcloud for our photos and unbound as a recursive DNS for my home network. I thoroughly recommend postmarketOS, and the contributors are amazing as well.
I was, however, running out of storage, so I ordered a 256GB SD card and set up mergerfs between it and local storage, which worked fine.
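For reference, a mergerfs pool like that can be declared in /etc/fstab so it survives reboots. The branch and mount paths here are made-up examples:

```
# /etc/fstab -- pool the SD card and internal storage into one mount
/mnt/sdcard:/mnt/internal  /mnt/pool  fuse.mergerfs  defaults,allow_other  0 0
```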
After some time, however, I got paranoid about having an old device with a LiPo battery constantly being charged in my home, so I decided to get a mini PC from AliExpress and chucked a 2TB SSD in.
In the meantime, I discovered Immich, which turned out to be much better for photos than Nextcloud, and fell in love with it.
The final thing I added was a miniDLNA service to play my local movies and shows on my LG TV without having to bother with Plex/Jellyfin and reencode anything. Unfortunately, it kept disappearing after roughly 2 days of operation, so I just added a cron job to restart it at 5 AM.
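That workaround is a one-line crontab entry, something like the following (the systemd service name is an assumption about the setup):

```
# crontab -e: restart miniDLNA every day at 5 AM
0 5 * * * /usr/bin/systemctl restart minidlna
```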
For the time being, I don't need anything more and am turning my attention to other things.
The challenge is that in an aquarium, plants grow reasonably fast but not fast enough to sell regularly for a decent income. You need ways to produce more plants faster, more reliably, and without taking up too much space. That’s where tissue culturing comes in.
It has reaaaally sucked me in. I’m culturing everything I can find. I’m also propagating aquatic plants through more typical means, and that’s fun too. I’m out of space though.
Tissue culturing is a really fascinating science and practice. I love keeping track of the media recipes, results, growth rates, etc. I’m too early to have had meaningful results, but I look forward to tracking those as well.
Plants have been slightly more challenging, but not as much as I expected. If you aren’t optimizing for profits to keep a facility open, mediocre results are still awesome and you’ve got plenty of time to keep experimenting. I guess it’s much the same with mushrooms. If you don’t get 5 pounds on your first flush, it’s still great fun. The beauty of mycelium is that turn around times are an order of magnitude shorter. Tissue cultures are very, very slow.
I've always really wanted to get into this but really don't know where to start.
Also, "Murashige and Skoog" is such a silly name; I love it.
I have a background in absolutely nothing. Anyone can do this stuff.
I mentioned in another comment, the book “Plants from Test Tubes: An Introduction to Micro-Propagation” was incredibly helpful to me. You can get a cursory understanding of things from YouTube or similar, but the book does a great job of explaining what you’re actually doing, how, and why.
A good place to start is prepping some media and containers, collecting some plant tissue, sterilizing it, and dropping it into the media in the container! I know each step here is a subject within itself, but it really is this simple when you zoom out a bit. If you aren't sure about how to make MS media, start with pre-made options from a company like Plant Cell Technologies. You can use glass jars with autoclavable plastic lids as the containers. You can use a pressure cooker as your autoclave. Collecting the plant material can be simple or extremely difficult (collecting meristem material can be excruciating if you haven't worked under a microscope before), and it's fine to start simple (just use a piece of a leaf). You'll want a flow hood or still air box, and you can make these for peanuts or buy small solutions for pretty reasonable prices.
As someone once told me: There’s nothin to do but to do it
> You’ll want a flow hood or still air box
This was what I was going to ask next. I live in the middle of the Pacific ocean so shipping large things here is prohibitively expensive (for a hobby project) so I guess I'll have to make one. I'll do some googling.
For example, I want to culture some plant. How do I get the best tissue from the plant and which media and hormones seem to work best? Should I use agar, gellan, multiply in a temporary immersion bioreactor, are there any special deflasking notes, etc. The information is out there for a ton of species!
The first attempt was over a decade ago, and I never really quite got it.
The second attempt was about 6 years ago, and while the fundamentals clicked and I improved dramatically as a musician/composer, I still only ever learned some small portion of the basics.
This time, I'm taking it a few steps further. I'm going back to the basics first to recheck my existing understanding of things, and really trying to take my time and understand each new concept before moving on.
I'm currently working my way through this online textbook: https://musictheory.pugetsound.edu/mt21c/frontmatter.html
I've also picked up a simple little budget keyboard (a 61-key Casio CT-S1) that does just what I need at a great price.
Having a lot of fun so far and learning a ton! :)
I realized, 15 years after finishing music school, how understanding chord progressions would speed up learning new songs on piano and playing jazz. So I'm looking for any kind of literature that would improve my knowledge there.
Whenever I get spammed, I check the email headers. Turns out they were using an email with the owner’s initials in the unsubscribe header. From it, I was able to easily guess their actual name. I found them on LinkedIn! I used that, plus the list of all the domains they used for marketing and sending mails, to build a pretty comprehensive map of their operations.
I thought for a while about what I could do with this info… but in the end, reporting them to my country’s consumer rights authority for spam did the trick. No reason to get in trouble myself, as fun as it could be.
So the lesson is: look at email headers! There’s fun stuff in there!
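For anyone who wants to try the same kind of digging, Python's standard `email` module makes it easy to pull headers apart. This is just an illustrative sketch with made-up addresses:

```python
from email import message_from_string
from email.utils import parseaddr

# A made-up spam message, for illustration only
raw = """\
From: "Hot Deals" <deals@example-marketing.com>
To: you@example.com
Subject: Limited offer!!
List-Unsubscribe: <mailto:jd-unsub@example-marketing.com>

spam body
"""

msg = message_from_string(raw)

# The From: header gives you the sending domain
name, addr = parseaddr(msg["From"])

# The unsubscribe target sometimes leaks a real mailbox
# (here, the hypothetical owner's initials "jd")
unsub = msg.get("List-Unsubscribe", "").strip("<>")

print(addr)   # deals@example-marketing.com
print(unsub)  # mailto:jd-unsub@example-marketing.com
```

In real mail you'd feed the raw message source (most clients have a "show original" option) into the same parser and look at `Received`, `Return-Path`, and `List-Unsubscribe` headers.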
https://ghostinfluence.com/the-ultimate-retaliation-pranking...
So I wrote https://github.com/arcuru/chaz to connect any LLM to Matrix so that I could use any Matrix client with any LLM.
That got me writing a Matrix bot account in Rust, and I couldn't find a simple bot framework so I split out a bot framework in https://github.com/arcuru/headjack
But _then_ I realized a really easy bot I could write that would also be very useful for me, and I wrote https://github.com/arcuru/pokem which is sort of a clone of ntfy.sh but using Matrix. Send yourself (or others, or a group) a ping on Matrix using an HTTP request or a simple CLI app.
[0]: https://matrix.to/#/%23thisweekinmatrix%3Amatrix.org?via=mat...
It's all sitting on my desk, first flight will likely be in May.
What sticks with me most through this experience is the brilliance of the open-source community (and a special satisfaction with now being able to say "I use nix btw")
Turns out getting particulates out of a solution is a massive, massive industry with a large body of science, literature, and engineering practice behind it.
EDIT: Here's a few wiki entries I found as OK overviews. ChatGPT was handy for figuring out what relevant literature in the field was and terminology I could use to find more pertinent resources:
1. https://en.wikipedia.org/wiki/Food_engineering
2. https://en.wikipedia.org/wiki/Ultrafiltration
3. Food Chemistry: https://www.amazon.com/Fennemas-Food-Chemistry-Srinivasan-Da...
4. Introduction to Food Engineering: https://www.sciencedirect.com/book/9780123985309/introductio...
5. Handbook of Food Engineering Practice: https://www.routledge.com/Handbook-of-Food-Engineering-Pract...
I originally used their spec, but have since learned milk punch is pretty forgiving. It works really well for complex flavors that have a lot of tannins or volatile constituents (tea, wine, citrus, etc). I’ve found good black tea and a deep, sweet port tends to be a winning combination.
Done correctly, the resulting punch is shelf stable. That said, it has a neat trick: since no filtration is perfect, you end up with trace amounts of milk fats in the solution that continue to react with any left over volatiles. This leads to a smoother, rounder flavor over time. The last batch I made with a bergamot tea and port ended up tasting like a fruity, complex boba tea after a couple months of rest.
This took me down a rabbit hole on current methods to detect seizure onset... I came across a very interesting journal article on applying ML in an implantable that can detect seizures within 3 seconds, which spurred my current research on less invasive detection methods. Like any good rabbit hole, I've strayed from the original mission.
Seizures seem scary and I don't want to give them to people, but the causes of their onset seem to be too nuanced and patient-specific to build with any guarantees. The best I can do is avoid the obvious and hope the cutting edge detection and mitigation research bears fruit.
1. https://developer.mozilla.org/en-US/docs/Web/Accessibility/S...
https://github.com/apple/VideoFlashingReduction
https://developer.apple.com/documentation/mediaaccessibility...
Apple users can dim flashing lights in settings
---
(https://news.ycombinator.com/item?id=35332531)
One of my Canadian friends was explaining that pretty much everyone chooses to do either hockey or figure skating from a young age with a small minority doing speed skating. She says every neighborhood has its own ice rink too. Also, apparently the women's league for hockey is becoming pretty popular in certain places, so that is cool. I'm guessing it's similar to that where you live?
In my area there was practically nothing, but recently a bunch of things got built like walking paths connecting the neighborhoods, a school, sports centers, a park not too far away...etc. I think it had an overwhelmingly positive impact on the community and only some of it came from tax dollars too.
Learning to skate is amazing. Once the motions start to “click” and you can start flying around the ice it is so much fun. Wearing full hockey pads makes the learning process a little easier on the body. Falls on the ice hurt!
Spring is a Java dependency injection system that uses XML-based configuration. Recognising that XML sucked, they later added 2 additional ways to specify configuration, which they call annotation-based configuration and Java-based configuration. Both kinds use annotations. Both kinds use Java.
Spring Boot is a layer on top of regular Spring that tries to make things simpler by automatically guessing what you're trying to do and configuring it for you, with something it calls auto-configuration.
Just trying to understand what makes what happen in a (very simple) Spring Boot app sucked weeks from my life.
I first discovered that there are tables (and tables and tables) of preflop card combinations that tell you whether to raise/call/fold based off of where you are in the dealing order and how many players are active.
Then I learned that those tables are basically derived from Monte Carlo simulations that calculate your chance of winning at any given moment (equity). It seems it's probably more accurate to make your decision based on equity and pot odds, i.e. the Kelly Criterion. That is, if I can get fast enough at estimating it in my head.
Also having fun trying to find open source libraries to do those simulations so I can create my own drilling exercises. Honestly, I'm having more fun doing that stuff than playing it.
The biggest dumbest problem is when blinds accelerate to the point that the minimum bet is greater than what Kelly would recommend. Some online poker sites are aggressive about this, to the point that you are forced to make irrational choices pretty quickly or get blinded out. A way to juice the house advantage I guess. It's almost enough to make me give up and find the next project.
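The pot-odds and Kelly arithmetic mentioned above is simple enough to sketch in a few lines. This is purely illustrative, not strategy advice:

```python
def pot_odds(pot, call):
    # Fraction of the final pot your call represents; a call is
    # profitable in the long run when your equity exceeds this.
    return call / (pot + call)

def kelly_fraction(p, b):
    # Kelly criterion: optimal fraction of bankroll to wager when
    # you win with probability p at net odds b (win b per unit bet).
    return (b * p - (1 - p)) / b

# Facing a 50 bet into a 100 pot: you need > 1/3 equity to call
print(pot_odds(100, 50))         # ~0.333

# A 60% edge at even money -> bet ~20% of bankroll
print(kelly_fraction(0.6, 1.0))  # ~0.2
```

The parent's complaint then becomes concrete: once the forced minimum bet exceeds `kelly_fraction(...) * bankroll`, every available action is over-betting by Kelly's standard.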
The house does not have an advantage in poker. With player-versus-player games, they earn money from hosting by taking the vig(orish). But rapidly rising blinds in tournament poker are a way for them to get more games played, hence the introduction of turbos in tournament poker and Zone-type cash games.
Better understanding common betting patterns is more useful than hand simulators or whatever. Being able to accurately update your opponent's range throughout the hand is essentially how to play well.
I've been on a journey for a while to understand how to layout diagrams / graphs in an "aesthetically pleasing but structured" way. Long story short, DOT[0] is the best language I've found for defining graphs (compared to doing something with Mermaid.js or any other markup language), but rendering with the DOT engine in GraphViz fails the "aesthetic" test for me.
Did a bit of a literature review[1] to understand better the different approaches, and to understand the scope of the field. This book does great job of defining and providing the keywords for the different levels of requirements, starting with "principles" that are provable in the academic sense, to "conventions" that are like principles, but cannot be necessarily computed (eg NP hard, so requiring heuristics or simulations to achieve), and ending with actual "aesthetics" where things get very subjective.
Ultimately got pretty deep writing my own force-directed graph simulation in Rust and visualizing with egui[2] (needed an excuse to work on UIs and I've always wanted to write less Python), but I'm taking a break to use what I've learned writing Rust to shore up the REST API testing suite for my dayjob.
[0]: https://graphviz.org/doc/info/lang.html [1]: https://www.amazon.com/Graph-Drawing-Algorithms-Visualizatio... [2]: https://docs.rs/egui/latest/egui/
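For anyone curious what a force-directed simulation boils down to, here's a toy version (the parent wrote theirs in Rust; the constants here are arbitrary, untuned guesses, with none of the real optimizations like quadtrees or cooling schedules):

```python
import math
import random

def force_layout(nodes, edges, iters=200, k=1.0, seed=0):
    """Tiny Fruchterman-Reingold-style layout: every pair of nodes
    repels, every edge attracts. O(n^2) per iteration, illustrative
    only."""
    rng = random.Random(seed)
    pos = {v: [rng.random(), rng.random()] for v in nodes}
    for t in range(iters):
        disp = {v: [0.0, 0.0] for v in nodes}
        for a in nodes:                      # pairwise repulsion
            for b in nodes:
                if a == b:
                    continue
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d                # repulsive force
                disp[a][0] += dx / d * f
                disp[a][1] += dy / d * f
        for a, b in edges:                   # spring attraction
            dx = pos[a][0] - pos[b][0]
            dy = pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k                    # attractive force
            disp[a][0] -= dx / d * f
            disp[a][1] -= dy / d * f
            disp[b][0] += dx / d * f
            disp[b][1] += dy / d * f
        step = 0.05 * (1 - t / iters)        # shrinking step size
        for v in nodes:
            d = math.hypot(*disp[v]) or 1e-9
            pos[v][0] += disp[v][0] / d * min(d, step)
            pos[v][1] += disp[v][1] / d * min(d, step)
    return pos
```

On a path graph, the endpoints drift apart while edge lengths settle near `k`; the "aesthetics" trouble starts when you want orthogonal edges, layering, or stable layouts across edits, which this kind of simulation doesn't give you.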
Jellyfin. Amazing. A little clunky here and there, so some automation is needed. Then the weird bugs - gotta get a debug instance going to see what is going on. Once you start thinking about it as a "personal Netflix" you start building out a larger collection, which needs organizing. Then your friends want an account. Then you realize your satellite tuner has a streaming function, so you start reading through the plugin code for other streaming boxes...
Next one was the Japanese PS2 DVR combo unit. Sold as junk (which mostly they are - none can read discs anymore basically) but very interesting in that they are beautiful and cool and weird. The English OSD translations will often break your unit in subtle ways - so I joined the discord and identified that this was actually a configuration management problem - you have to only ever use a translation which was done to a set of OSD files which match your firmware revision. So I started writing a framework to auto-translate the XML file strings. Then the guys on the community mentioned the images with strings, as well as detecting when your translated string is too long for the UI element... I am still working on the framework. No dealbreakers yet! I will probably buy a few more to generate all my own OSD translations. My wife is going to kill me - I have 2 PSX DVR's now, and probably will buy more on our trip to Japan soon...
It's interesting to discover that a lot of 'facts' about public transport people take for granted just aren't true. The names and liveries by which vehicles go often don't actually correspond with their actual operators and owners. The company that's named on your ticket might not actually get any money from your purchase - or they might make money from passengers who don't pay at all, due to a myriad of subsidy schemes run by different levels of government.
Waves of privatization and re-nationalization with political motivations at every turn have produced a system which is amazingly efficient in some ways, and appallingly wasteful in others. Workforce strikes are obvious to the general public, but what's not obvious is who the negotiating parties even are, with various trade unions (and unions of unions) competing against various management groups (and groups of groups).
Some things are pleasantly surprising. Without any fanfare, digital systems for vehicle tracking have been introduced with remarkable efficiency. Then, for me, there's the astonishment of discovering that not only is every timetable published in a consistent, nation-wide data format, but one that has been utilised in production for twenty years!
It all makes me realise how limited the public discourse about public transport and 'green' mobility policies are in my region. It is simply impossible to grasp the true consequences of any given proposal in the meagre columns that they're given in the newspapers and the two-minute reports in which they feature on the television. Diving into this rabbit hole has led me to respect the complexity of the field much more than I did before, and fills me with both hope and despair on topics which I had hitherto scarcely lent a thought.
Wife and I live on a floating house in Oregon outside of Portland. Taking my paddle board to various beaches on the river to pick up trash has become a beloved hobby of mine. Fantastic way to get exercise, get out in nature, and clean up my community.
I kept finding old, antique bottles during outings: Coke bottles from the 40s and 50s, little medicine bottles from who knows when, old beer bottles from brands no longer in business. I eventually learned that there's a whole hobby around this called "bottle digging", even an active subreddit (r/bottledigging).
I've since bought a nice snorkel and mask and have been diving for old bottles on clear days, when river visibility allows it. Thinking about taking it to the next level and getting scuba certified.
If you like finding things, you might like bottle digging.
I've also learned how to make pendants out of them.
Although, if I were you, I'd be looking for old necklaces, watches, etc. in that river. It excites me to imagine finding some old gold coins or something.
We frequented a subdivision that was being built on land that was a former city dump.
Turning a thousand-page book - PAIP - into a stack of Markdown files in a git repo, readable online. The print book received more editing and revisions than the ebook. I converted an ebook's ... odd formatting ... into Markdown, remade diagrams, generated new ePub and pdf files, and had the spine cut off a print copy to make a fresh scan. Working on that scan, I made Scantailor, an X program, easier to access from a Mac, via Docker. I tried different OCR engines, and pored over the diffs, incorporating dozens (hundreds?) of improvements. I got to find so many differences between Markdown engines. I have ideas on how to make Pandoc links between chapters. There's still a lot to do!
My current WIP: Lars Wirzenius posted about file systems with a billion (empty) files. I started exploring because I was curious, if I was remembering correctly, how well a mostly empty image file would recompress - like, drive_image.gz.gz. Lars offered a Rust program; I was curious about how other methods compared. Like, how about nested shell loops, tar, and touch? And, hey, how well can we archive and compress them? I've gotten to see some issues, bottlenecks, and outright failure modes with SMR hard drives, Samba re: sparse files, and parallel gzip compression. I've accumulated some shell script boilerplate to make it easier to go back and verify my processes, and harder to accidentally wipe out past work if I rerun it.
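A toy-scale version of that experiment fits in a few lines (a real run would target a filesystem image and vastly more files, and the interesting numbers are wall-clock times and archive sizes at scale):

```python
import os
import tarfile
import tempfile

def make_empty_files(root, n):
    # Spread n empty files across nested directories, mimicking the
    # "huge number of empty files" experiment at toy scale.
    for i in range(n):
        d = os.path.join(root, f"d{i % 100:02d}")
        os.makedirs(d, exist_ok=True)
        open(os.path.join(d, f"f{i}"), "w").close()

with tempfile.TemporaryDirectory() as work:
    data = os.path.join(work, "data")
    os.makedirs(data)
    make_empty_files(data, 1000)

    archive = os.path.join(work, "files.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(data, arcname="data")

    # Empty files are pure metadata: ~512-byte tar headers that
    # compress extremely well, so the archive stays tiny.
    size = os.path.getsize(archive)
    print(size)
```

The fun failure modes the parent mentions (SMR drives, sparse files over Samba) only show up once you push the file count far beyond what a sketch like this exercises.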
Did some Duolingo for French just recently, and did the same for Latin two years ago. It's difficult to read them, but broad context is enough for an overview understanding, and I feel my detailed understanding increasing chapter by chapter from exposure alone.
Apart from the actual books, I enjoy the bidding, which is similar to (but healthier than) gambling on stocks. As for the "winner's curse" of overpaying for stuff I really like: I'm aware of it, but the dopamine is worth it. There are some arbitrage opportunities as well, as in finding things at local auctions and selling them online internationally, but international shipping for heavier stuff like entire œuvres, plus the time investment to pack and ship something, deters me from doing that. I wouldn't want to damage 1600s books in shipping, I suppose, even though they look quite cheap locally and fetch more internationally.
I love how many absolutely insanely interesting books from that period are affordable. I have a copy of the book in which Bishop Ussher put forward his theory that the earth was made on "the entrance of the night preceding the 23rd day of October... the year before Christ 4004"; that is, around 6 pm on 22 October 4004 BC.
I use it as a cautionary example of data-driven analysis. It was all derived from the lineages and ages mentioned in the Bible. Seemed like a solid methodology at the time!
- I’ve ditched VSCode and gone all-in on NeoVim. I’ve spent a bunch of time watching Primeagen, etc., tweaking my vim config and learning how to navigate as efficiently as possible.
- Switched from QWERTY to Colemak-DH to hopefully reduce RSI. I’m at about 70wpm with decent accuracy after 4 weeks. My QWERTY skills are gone. I like Colemak, but we’ll see how I feel in another month or two.
- Finished my custom hot swappable Sofle keyboard, and spent many hours customizing the layout. I think I’m pretty close to feeling comfortable. I’m using home row mods, which I love. Currently using Kailh box whites (clicky). Might switch to Gateron Brown Pros.
- Been going through a “Build your own git” course, to understand git as deeply as possible.
That’s encouraging to hear that you can switch between the two. Awesome!
I’m afraid to start practicing QWERTY too soon, and risk losing my progress with Colemak. Maybe I’ll attempt it in a few months.
I don’t own a 3d printer, so designing a custom Dactyl is not very feasible for me.
There are only two thumb keys per side. I’ve had to get a bit creative with my layout. One trick I’ve discovered is Mod-Tap. This lets me use my space bar as a layer key (when held), or a normal “space” when tapped. Two functions on a single key. Awesome.
I’ve also been reading this person’s blog to improve my symbol layer and vim navigation (I’m tempted to try the Engram layout, but I’ll stick with Colemak for now): https://sunaku.github.io/engram-keyboard-layout.html
Something is incredibly beautiful to me about classifying the kinds of symmetry things can have.
I’m trying to understand where the sporadic simple groups come from, starting with the Mathieu groups. So far it seems to be due to some anomaly in Pascal’s triangle, but I’m still trying to put it together. “Another Roof” on YouTube has a good video about this.
I wanted to be able to do this cross platform, so I re-implemented ELF patching and Mach-O patching and adhoc signing in Python, and wrapped them into a tool called repairwheel: https://github.com/jvolkman/repairwheel
https://github.com/ZQuestClassic/ZQuestClassic
That guy was busy with other tasks, so iterating on firmware was too slow. So I decided to dive in. I mean I knew C a bit.
So I had to learn STM32 ARM, I had to learn low-level C, I had to learn assembly, I had to get some understanding of those electronics things to get some sense of it, I had to read tons of manuals and datasheets.
Long story short, I rewrote this PoC firmware into something I could bear. It's so nice to control all software from the start to the end.
Now our company wants to rework this device into "smart", add display with touchscreen and stuff. So I'm digging into embedded Linux programming, LoL.
I generally consider myself a full-stack developer, so I can write frontend, backend, and Kubernetes manifests, set up servers, and deal with cloud stuff. However, digging this deep feels like testing my limits.
Please don't do that. Add some kind of interface, e.g. Bluetooth, and an open-source app, or an open-source/documented protocol, to control it.
0: https://ajxs.me/blog/Introduction_to_Reverse-Engineering_Vin...
1: https://archive.org/details/Hitachi-DotMarixLiquidCrystalDis...
I've been attempting to add an OAuth2 device code flow to a TACACS+ server, with the goal of extending Azure accounts to network device management planes. Pretty neat: I can get an "enter this code at this URI" prompt from the router/switch and let Azure do its 2FA/compliance checks. Currently trying to get token validation working on the TACACS+ server =).
The ultimate goal is to have a reverse-proxy web front end, kind of like Apache Guacamole, that does the OAuth for the user; when they click on a network device, the JWT is passed through to the network device over SSH, and thus to the TACACS+ server (which is relatively local to the network device), which validates it and lets the user in.
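The polling half of the device-code flow (RFC 8628) is the fiddly part; here's a minimal sketch with the HTTP call injected as a callable, since the token endpoint URL and exact response shape are provider-specific:

```python
import time

def poll_for_token(request_token, interval=5, timeout=300):
    """Device-code polling loop per RFC 8628: keep asking the token
    endpoint until the user finishes signing in via their browser.
    `request_token` is an injected callable returning the parsed
    JSON response from the provider's token endpoint."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = request_token()
        if "access_token" in resp:
            return resp["access_token"]
        err = resp.get("error")
        if err == "authorization_pending":
            time.sleep(interval)       # user hasn't finished yet
        elif err == "slow_down":
            interval += 5              # provider asked us to back off
            time.sleep(interval)
        else:
            raise RuntimeError(f"device flow failed: {err}")
    raise TimeoutError("user never completed sign-in")
```

The "enter this code at this URI" prompt comes from the earlier device-authorization request; this loop then trades the device code for the token that the server-side validation would check.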
Playing around with GPT4/Opus a lot lately and man... I have feelings. They've been a great learning tool to learn the basics of Go though so I'm thankful.
It's going swimmingly /s, but I seem to be making progress. Slowly. I'll bake this into my bigger network management tool if it can be made secure and makes sense to do so...
Add to that the fact that you have to manually reload an unpacked Chrome extension to apply new changes. So I've hooked solution 1 up to a WebSocket server, plus a custom Chrome extension that watches all your other extensions and talks to the WebSocket server to auto-reload an unpacked extension any time the build step completes.
It’s a nightmare hack, but I may just be the world’s most productive Chrome extension builder as a result! I’ve released 5 extensions in the last 5 months :)
About to release a big one, but it’s also probably the world’s most complicated Chrome extension (LLM + Firebase store + Stripe + Auth + Serverless functions)
My custom workflow handles everything for me; I never need to restart the browser or anything like that
https://www.npmjs.com/package/vite-plugin-extension-reloader
The extension is of the same name
This weekend I was able to reach my home node from a state park 8.2 km away and have been giddy since.
[1] https://learn.microsoft.com/en-us/sql/tools/bcp-utility?view... [2] https://babelfishpg.org/
Fast forward into my first job, there was a Windows machine with a crashed HDD and my task was to recover as much data as I could. Windows tools all sucked hard, even the ones that we had at the company that cost thousands of bucks per month. That's when the power of the Linux ecosystem hit me.
Went on to being an IT forensics guy, then went into pentesting, then into blueteaming and now I am having my own startup that builds a better EDR software.
I still have to think about that coincidence. Literally nerd-sniped my life, otherwise I probably would have still been a sysadmin or something.
On the way I learned a ton of new things, from programming languages and compiler bugs to exploit techniques and kernel development and even hardware design. If you go deep enough in hardware pentesting, the whole phreaking scene is amazingly welcoming. The CCC chapters are also amazing, and there's just so many opportunities to grow your knowledge and experience in the field. It never gets boring!
- 3D-printable parts storage solutions (via: I found some part storage bins in the discard pile at a local hackerspace)
- MITM proxy to snoop on GitHub Copilot API requests (via: we're building a Jupyter AI assistant thing and got curious how other players do it).
- DIY robot arms (via: I'm making several for a nested 'you pass butter' joke, via a casual conversation about robotics being accessible now. YouTube is amazing at surfacing smaller makers once you start watching a few videos on a given topic)
- Learning about Oauth and JWT (via: 'why is auth still a pain?')
- Invertebrate UV fluorescence (via: that millipede is glowing under my UV torch!)
(a small subset of these end up documented https://johnowhitaker.dev/all.html eventually if you're curious to see a longer historical list)
I like rabbit holes where following the curiosity gradient to a satisfying conclusion is possible. "How does X work" leads eventually to code that does X. I'm less happy when they lead into a tangle of complexity, like digging into a library only to find weird abstractions 6 layers deep or trying to compare 18 different alternatives in a field I don't know very well.
OP I'd also like to hear yours!
Today I gave some thought to what would be a fitting name for my boat (if I were to rename it).
One option: the glider pattern from Conway's Game of Life. Instantly recognizable by true hackers, just a weird symbol to others.
Of course, a quick check on Wikipedia followed. Know that I'm always interested in things small / simple / computing, so... cellular automata. Which led me to varieties used to simulate or help understand biological systems ("systems biology" - if only that field had even existed back when I left high school).
From there on: artificial life, Core Wars & co, self-replicating machinery, and... Astro-chicken (deserves a HN post of its own, imho).
Btw. it's amazing to see how many big, open questions there still are, related to the origins of (biological) life, and evolution. Eg. full simulation of a single cell organism: never been done (too complex).
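The glider itself is small enough to watch move in a few lines; a minimal Life step over a set of live cells:

```python
from collections import Counter

def step(cells):
    # One generation of Conway's Life (B3/S23) on a set of live
    # (row, col) cells -- no grid bounds needed.
    counts = Counter((r + dr, c + dc)
                     for r, c in cells
                     for dr in (-1, 0, 1)
                     for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {p for p, n in counts.items()
            if n == 3 or (n == 2 and p in cells)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)

# After 4 generations the same shape reappears, shifted one cell
# diagonally -- the "motion" that makes it such a good emblem.
print(g == {(r + 1, c + 1) for r, c in glider})  # True
```

Paint the five-cell pattern on the hull and the same code doubles as the provenance document.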
Next up: a cup of hot chocolate.
Fast forward a few years... I just wrote a book on the timing topic, called Why Now: How Good Timing Makes Great Products.
I got into it through wanting some cheap radios to keep in my house, so I wouldn't have to go all the way upstairs when I needed to communicate with my daughter.
Well let me tell you Baofeng radios are extremely cheap but really flexible. I got these things for the simplest possible use case but after realizing their potential I just had to learn more about the space. You can adjust their configuration with a tool called Chirp and you're off to the races!
I attended a local severe weather awareness event where I met some hams who were part of an emergency response network. It's really cool to learn about how these communities operate. It's legal to receive even without a license - you only need the license to transmit.
I plan to take the technician test soon and get my license so I can help out at a nearby bike event. The area is incredibly rural so there's no cell coverage and the ham operators are really helpful in coordinating things.
Anyway, I feel like the hobby is a bit of a dying art, but it's something that seems like it would have a lot of appeal to the programmer crowd.
If you have a repeater in your area there may be a net where other licensed operators check in every week. I've heard people check in from over 60 miles before. You can check this website for repeaters in your area: https://mygmrs.com/repeaters. Good luck on your technician test!
Launch monitors come with their own software too and there are a few other options for simulating courses. E6 is one and GolfClub2019 is another. AwesomeGolf. GSPro is hard to beat though.
For launch monitors, there are two main types: camera based and radar.
Garmin offers the most affordable radar based option with R10 for around $600.
Bushnell offers an all camera model and for $2000 you get all the ball data. For a subscription fee you can play GSPro and other 3rd party golf apps using the Bushnell and for another fee you can get all the club data. This model sits beside the ball.
FlightScope has a launch monitor that operates with radar and/or camera and sits behind the player. For about $1800 you can get the ball data and its free to connect it to GSPro. For another $1200 you can get club data, with impact location. It's unreal how accurate this thing is. I've had mine for about a year, and since then they have pushed some incredible updates, including what they call "Fusion", which is combining the camera and radar for readings. Its how their really expensive 20k+ unit works.
At the very top end there are monitors that mount on the ceiling and give you readings from there. And then there are commercial simulators where the floor will move up and down. One company showcased lighting from above that shows you where you should putt. It really never ends...
https://gsprogolf.com/
https://www.garmin.com/en-US/p/695391
https://www.bushnellgolf.com/products/launch-monitors/launch...
https://flightscope.com/
https://www.foresightsports.com/pages/gchawk
https://uneekor.com/
I had never practiced golf in my life (late 30’s) and would play 5-8 rounds every summer for the last 10-ish years. So I had a ton of room for improvement. I also grew up playing competitive tennis, so the repetition of hitting a lot of balls is familiar and scratches an itch I’ve longed for.
I’ve just learned so much about golf from simulating golf that I never would have taken the time to learn.
And I’m sorta anti-lessons, more learn-on-my-own. So the sim is right up my alley. It’s also great exercise: 330 cals in less than an hour, and I can play 18 in less than 40 mins…
I think everyone agrees it makes you a great ball striker. At the very least. Then the rest is golf. Which can be very very very tricky, even for the best of players.
Taking one broken engine and one non-operational one and turning them into a single good motor. American thin-wall cast V8 engines are fairly similar, but different enough that if you didn't build them yourself you have to do a bit of puzzle solving (especially in the timing case). Plenty of YouTube videos and forum posts on the Cleveland, and it's been fun piecing it back together and learning about new things like installing cam bearings.
Trying to compare good query plans with bad ones, and then work out what changes we need to make to the slow queries is ... interesting.
Ostensibly I wanted to be able to code on the production server like a miscreant with the same tools as my laptop.
Really, though, I just wanted to regain command of my dev environment after years of not coding.
I also reorganized the furniture in my office and got weirder lighting to make it hacker friendly. I bought a new desk to solder electronics.
Most people know me as a partnerships marketer or product manager but I am a compsci at heart. This made me happy.
However, I can salve my ego by spending a day flipping through Neovim colour themes.
https://github.com/sunir/NvChad
Overall I still think I am faster in Sublime Text. I get stuck in the different modes. I find shift-select and grep to be pretty frustrating.
However I will muscle through this. Every challenge is another set of vim stuff to learn. I have faith I will love it later.
What was interesting was seeing them piece the theory together from very fragmentary evidence (which is even more mind-blowing given how little evidence there was to play with, considering we are talking about a 747).
The true mystery left is how he executed the suicide, and why. How did he dispatch the co-pilot? (There was a very small window to do so.) What did he do in the final hours (was he alive for the entire duration)?
I researched what topics are typically included in a 4-year math major university program and what textbooks are used to teach those topics at MIT. Then started grinding all the way through from beginning to end.
It was so awesome that upon finishing, I promptly started all over again... but with physics instead.
DOIs (Digital Object Identifiers) are used by many modern research papers as a sort of UUID for papers, run by doi.org.
But adoption is discipline-specific, so they're used widely in some fields while others rely on different databases.
For biology-related papers, there's NIH's PubMed ID; for astronomers, Bibcode.
All are "global" identifiers and each has some kind of consortium that's trying to make theirs the One ID. DOI seems to be the closest.
There are several registries, crossref is the big one in the west but it's not the only one. They have probably the best access to the data out of all of the larger registries though.
DOIs are pretty good, though not truly persistent, and there's no versioning built in, so people have their own formats.
I spent a lot of time working with these as part of https://dimensions.ai for a decade or so. Happy to chat if you want to delve in more.
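Since everyone formats them differently, a small normalizer goes a long way. The pattern below is a rough heuristic, not the full spec (real DOI suffixes can legally contain almost anything):

```python
import re

# Rough pattern for Crossref-style DOIs: "10.<registrant>/<suffix>".
# A heuristic for common citation forms, not a validator.
DOI_RE = re.compile(r'\b(10\.\d{4,9}/\S+)\b')

def normalize_doi(text):
    """Pull a DOI out of common citation forms and lowercase it
    (the DOI system treats identifiers as case-insensitive)."""
    m = DOI_RE.search(text)
    return m.group(1).lower().rstrip('.,;') if m else None

# All of these refer to the same identifier:
forms = ["doi:10.1000/XYZ123",
         "https://doi.org/10.1000/xyz123",
         "See 10.1000/xyz123, figure 2."]
print({normalize_doi(s) for s in forms})  # {'10.1000/xyz123'}
```

Canonicalizing like this before matching against registries avoids a lot of the "missing DOI" lookups described above.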
I know you guys work with ORCID too, right?
We work with CrossRef to get data but if a DOI is missing, then things get harder to find in CrossRef in our experience.
I saw my grandparents go through 10 years of doing all the retirement things (beach, hobbies, social hour, etc.) to keep them more than busy. Then they kind of did it all and just started having social hour earlier and earlier every year into their 80s. Cocktail hour started at about 11:30 am in the end.
I'm starting my retirement and have so many interests/subjects I want to learn that I feel like I can stay pretty entertained for more than 10 years... but maybe not.
It's not like I'm sitting around on my ass all day. I walk the dogs, bicycle, rock climb indoors, and travel. I'm starting Tai Chi and just bought an e-drum kit. After I retired I spent a year renovating one house and have spent a lot of time this winter on another. Also lived off-grid in a forest for three summers and am doing it again this year. Off-grid ain't easy. Can't just turn on the faucet and presto hot water.
A while back I had bought a domain for my email, and I thought “I should write a blog about creating a blog”. At first I hosted it in GitHub Pages, but then I realized I have a perfectly good Raspberry Pi. It’s not like I’m ever gonna get a lot of traffic… so why not self-host?
That sent me into a very deep rabbit hole. How do I make sure my website doesn’t go down if my IP address changes (no static IPs for me, sadly)? How do I create and automatically renew a certificate? How do I achieve high availability?
A few years passed, and now I have a cluster of a few Raspberry Pis running Docker Swarm, managed by Portainer, with stacks running multiple websites and services I self-host. I’ve learned a lot!
My next move is going to be a full overhaul: Docker Swarm is blocking me from setting up some things the way I want to, so I want to build a new cluster using Kubernetes. I’ll use the opportunity to overhaul the network layout as well.
The funniest part is that I haven’t written a single blog post in 3 years. I wanted to add responsive images so I could add diagrams and photos. Somewhere along the way I realized I shaved too many yaks.
Cloudflare tunnel would solve the issue of static IPs and you also get DDoS mitigation and caching. Caching on the edge would be especially beneficial for something like a blog which is likely to be fully static or SSG at most. Although it won't help you with the writing part (;
You’re right that it’s a good idea, though. I’ll look into it!
https://www.youtube.com/watch?v=5mmISldi060
I realise now that I've spent most of my career just shaving yaks.
My current rabbit hole is tuning my home's boiler to use gas more efficiently. It is an interesting engineering problem: the thermostat is dumb, so there's no feedback loop, and home-specific variables make smart thermostats useless and boiler sizing more complex than most installers understand. My goal is to add some features specific to my home that reduce gas consumption based on more variables: outside temp, minimum outside temp, sun/clouds, and home variables like brick wall temp, fireplace heat, and boiler controller settings, as well as the wife's 'I'm cold' variable.
I am using a microcontroller and a custom current switch, as well as IOTSTACK, to send inlet/outlet temp and gas valve & circulator state (on/off) to InfluxDB/Grafana so I can see what is happening between the thermostat and the boiler controller. I have identified a few freebies in terms of consumption and inefficiencies. I have added a relay to delay the gas valve once the boiler starts cycling, to reduce "short cycling", which wastes gas on startup and causes a mini explosion every time the gas lights. I have managed to cut cycles in half, which helps with wear and tear as well as the number of boom sounds coming from my boiler room :)
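The hold-off idea is simple enough to sketch; here's a toy model of that kind of gas-valve delay logic (the 120-second hold-off and the event format are my illustrative assumptions, not the actual relay timing):

```python
# Toy model of a gas-valve hold-off: ignore thermostat calls that drop
# out before `holdoff` seconds, so short cycles never light the burner.
# Event format and the 120 s default are illustrative assumptions.
def valve_open_times(events, holdoff=120):
    """events: list of (timestamp_s, thermostat_calling) samples.
    Returns timestamps at which the valve is allowed to open."""
    opens = []
    call_start = None
    for t, calling in events:
        if calling and call_start is None:
            call_start = t        # a call for heat begins
        elif not calling:
            call_start = None     # call dropped early: short cycle filtered out
        if call_start is not None and t - call_start >= holdoff:
            opens.append(t)       # call survived the hold-off: open the valve
            call_start = None
    return opens
```

A 30-second blip never opens the valve; only a sustained call does.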
I would love to go down the simulink rabbit hole but I think I will not.
What do you mean my launch.json file is missing? It was there yesterday?
Wait, I can set up custom launch settings in my launch.json? What else?
Ok, so I've got seven different launch settings in there, and now to see if I can have one used for markdown for my markdown word editor.
Oh, neat, lots of extensions for markdown.
Wait, you can install vim?
An hour later, and I've completely re-broken my VSC and am reinstalling from scratch.
For those who are curious, it is not one single law that makes this the case, which is why many people will tell you that it's still allowed. However, with the combination of privacy laws forbidding the use of aerial cameras, UAV regulations limiting the locations you can fly, limitations on certain radio bands as well as specific rules about maintaining line-of-sight, it is not usually possible. How could it work? Perhaps if you live in a rural area with no nearby air traffic, have permission and access to use a large patch of land (such as a farm) from which to take off, a willing partner to maintain line of sight (which of course precludes many of the stunts often associated with competitive flying) and you have done the paperwork. Not really hobby territory for most people at that point :(
Is it possible to start super cheap just to see if I like it and then upgrade?
Then to start in real life, complete kits from BetaFPV or GepRC are around $200 (including drone, rc and analog goggles); you can find them used for about half that, in excellent condition.
But there is NO POINT in trying to fly an actual drone before doing plenty of hours on a simulator: you would crash constantly and destroy the drone before you even get started. So just start on a simulator. 10 hours is the absolute minimum you'll find everywhere, but I'd recommend around 50 (you can listen to podcasts at the same time).
If you want to go the extremely cheap route you can start with a cheap simulator (FPV Freerider, $5) or even a free one (FPV SkyDive?) and use an existing gamepad -- but gamepads really are confusing and don't work like RCs (the throttle joystick should not center automatically).
Spent WAY too much time adapting a Buddhist sutra into a heavy metal banger: https://www.youtube.com/watch?v=H-5Y9Z7DK4s
Now I'm trying to abandon 30 years of muscle memory and typing at 4 wpm while I learn Colemak-DH. Maybe what I should really do is build a custom 34-key board...
[0] https://github.com/benhamad/blog/blob/main/2024-04-12-dramal...
I'm building a paperweight inspired by vintage brass table lamps to hold the papers in place on a wooden platform.
The REAL rabbit hole is the astounding amount, and quality, of AUv3 plugins for iOS. Sounds, effects, looping tools, MIDI things, just... wow. And almost all of them are under $20, and many are free! I've spent less on a dozen software toys than on the first two guitar pedals I got. And infinitely more powerful.
Check out this video of someone doing the looping thing way way way better than I'll ever be able to (but it's fun to work towards a goal). Software she's using is called Loopy Pro, another amazing thing:
https://www.youtube.com/watch?v=T1O0pwUMbnw
It's almost certainly a rabbit hole, but at least I've forewarned myself.
I also looked at vueforms and surveyjs, the builders are not free though.
Electric Bikes: hub motors and mid-drives are a really great spring/summer rabbit hole to go down. So many form factors of ride, and you can also kill two birds with one stone by going for 60V lawn care equipment (there are adapters on eBay to connect them to your bike).
- Designed/built a small USB controlled pan/tilt camera head to control the mirrorless I use as a webcam (couple of servos, gears, belts), and then designed/built a custom ortholinear keyboard with a joystick to control the camera (custom PCB, CNC'd aluminum case, etc)
- I'm a pretty big runner, built my own web based calendar UI that integrates with Google Calendar where I can type in workouts like "1 mile warmup @z2 + 5x(30 seconds @ 6:00/mile + 0.5 miles recovery) + 1 mile cooldown" and this gets parsed/total weekly mileage gets tallied. The next step down this rabbit hole is building a small iOS app to automatically generate Apple Watch Workouts using WorkoutKit.
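A toy version of that workout-string mileage tally (the grammar and helpers here are hypothetical, not the actual app's code) could look like:

```python
import re

# Hypothetical mini-parser: tallies total mileage in strings like
# "1 mile warmup @z2 + 5x(30 seconds @ 6:00/mile + 0.5 miles recovery)".
# Paces and time-based segments are ignored; only "N miles" terms count.

def split_top(s):
    """Split on '+' at paren depth 0, so repeat blocks stay intact."""
    parts, depth, cur = [], 0, ""
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        if ch == "+" and depth == 0:
            parts.append(cur)
            cur = ""
        else:
            cur += ch
    parts.append(cur)
    return parts

def total_miles(workout):
    total = 0.0
    for part in split_top(workout):
        part = part.strip()
        rep = re.match(r"(\d+)x\((.*)\)$", part)
        if rep:  # repeat block: multiply the inner tally
            total += int(rep.group(1)) * total_miles(rep.group(2))
            continue
        dist = re.match(r"([\d.]+)\s*miles?\b", part)
        if dist:  # distance segment
            total += float(dist.group(1))
    return total
```

The paren-aware split is the only subtle part; everything else is two regexes.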
The reason why I want to learn more about it is I feel it is like a base building block of distributed systems and may be easier to grok and even write a toy version than a bigger thing like kubernetes or a leaderless distributed datastore. I would also learn some go and know how a critical piece of kubernetes works.
What led me there is practicing for a damn system design interview. As much as this whole topic is controversial on HN the grinding has really got me curious about the tech that runs at larger scales and how it works under the hood.
I just got the box finished up over the weekend and it's working really nicely so far. I went with an MSI MEG x670e "Godlike" motherboard, AMD Ryzen 9 7950X CPU, MSI Gaming Radeon RX 7900 XTX GPU, and 64GB of DDR5 RAM. The whole thing is running Ubuntu 22.04 (I was going to use Alma Linux, but the Alma installer wouldn't even start), ROCm 6.1.0 and Ollama. So far I've mostly been working with the LLM stuff using Ollama and the llama3 model.
Now I've started using Spring AI to interface with the Ollama API. Next steps: figure out "function calling" with Ollama (which doesn't seem to be supported by Spring AI yet, boo) and some "agentic workflows" and multi-agent stuff...
It started with an article about the hypothesis that planet nine may be a primordial black hole with 3-6 Earth masses.
What’s a primordial black hole? It’s one that formed in the first seconds after the Big Bang. We don’t know for sure they exist but many theories and simulations predict them.
They’re an excellent dark matter candidate. Could it be that simple? Could at least a lot of the missing mass be tied up in little baseball sized embers from the birth of the universe that rarely interact with anything so we don’t see them? They’d be small, would rarely interact, and unless they are sucking in mass (causing a hot accretion disk) would be dark.
Then I got onto Hawking radiation and whether micro black holes could exist. Along the way I read about loop quantum gravity (LQG) which looks to me like a decent stab at unifying QM and GR that’s much less baroque and more testable than string theory.
That then led to the LQG “bounce” hypothesis for black holes. See LQG does away with true infinite mass singularities. Instead a black hole would be matter packed to its theoretical maximum density (which is still insane). From there it would quickly “bounce” and become a white hole.
So wait… how do black holes persist then? Time dilation! From the hole's frame of reference it collapses, instantly bounces, and goes kaboom. From our frame of reference, though, all that gravity slows it to such a crawl that the black-hole phase at or near max density looks stable. The bounce takes billions or even trillions of years!
Last but not least I learned about the black hole starship idea. It’s a set of ideas about how far future intelligences could use black holes as mass energy converters to reach relativistic velocities. Might be somewhat easier (for crazy sci-fi values of “easy”) than handling antimatter. This also gives SETI yet another wild extreme technosignature to look for.
… and back to the beginning I found a post about how if planet nine were a PBH we could use it to yeet probes to the stars at meaningful fractions of c… at least if we could make them able to survive insane g forces. Unlike the black hole starship this would be feasible today. It’d just be a gravity assist off a ludicrous gravity well.
Here I thought black holes were dull. Turns out they’re the most extreme objects in the universe and a whole lot of the most amazing physics intersects around them. If there is any way we could tap into the phenomenon we could potentially access sci-fi levels of energy too.
The bounce idea is super neat because it feels less “magical” than a true singularity. A black hole is just a whole lot of mass stuck in a time dilation tar pit… from our frame of reference.
There are other implications too. From what I read LQG may allow stable micro black holes due to quantum effects dominating at small mass, naked singularities (well not true singularities but regions of off the charts mass energy concentration not hidden behind an event horizon), and Hawking radiation subject to quantum spectral effects similar to how emission spectra work.
It also resolves the black hole information paradox. All the information just bounces back out. Easy.
It's been a while since I've worked on a CRUD app so I'm finding the whole thing quite interesting. The purpose of the app is to solve a scheduling problem.
I've written my own CDCL SAT solver (now just using google or tools), and on the app side I've jumped from Phoenix (elixir) -> Dream (ocaml) -> axum (rust) -> Django. I feel like Phoenix probably perfectly suits what I'd like to do with this app (long running tasks and collaborative editing) but I'm at the point where I want to support this app long term and I don't see me not being familiar with python anytime soon.
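For anyone tempted by the same rabbit hole: the core of a DPLL-style solver (the ancestor of CDCL, and nowhere near a real CDCL implementation with learning and watched literals) fits in a few lines:

```python
# Minimal DPLL sketch. Clauses are lists of ints; negative = negated.
# Returns a satisfying partial assignment {var: bool} or None if UNSAT.
def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    simplified = []
    for clause in clauses:
        # Drop clauses already satisfied by the assignment.
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue
        # Remove falsified literals; an emptied clause means conflict.
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None
        simplified.append(rest)
    if not simplified:
        return assignment  # every clause satisfied
    var = abs(simplified[0][0])  # naive branching heuristic
    for value in (True, False):
        result = dpll(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None
```

CDCL adds clause learning and non-chronological backtracking on top of exactly this skeleton, which is why it's a nice stepping stone before reading a real solver.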
Bookbinding has fascinating details.
Crazy how familiar and yet different things are.
That's exactly the kind of topics they explore.
It's fun to reach a point where things just run forever at zero expense.
In parallel, I work hard on developer experience just for myself. I finally get the greybeards and their keyboard incantations. No UI can beat scripts and muscle reflex.
an amusing AI generated origin that entertains the humans sufficiently well, while solving some frictions the humans encountered
basically somebody launched a crypto token and rugged it for some quick cash. it was a hit because their code, coaxed on and perhaps entirely generated by ChatGPT, did fun things:
- issues an NFT from a collection whenever you buy a whole token, but not when buying fractions
- transferring the NFT transfers the whole token
- while selling a non-whole unit of the token irreparably burns the NFT
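A toy model of those three rules (purely illustrative Python, nothing like the actual on-chain contract):

```python
# Toy ledger for the whole-token/NFT coupling described above.
# Balances are floats for simplicity; a real contract uses integer units.
class Toy404:
    def __init__(self):
        self.balance = {}  # owner -> token balance
        self.nfts = {}     # owner -> NFT count

    def buy(self, who, amount):
        # Minting happens only when the whole-token count increases.
        before = int(self.balance.get(who, 0.0))
        self.balance[who] = self.balance.get(who, 0.0) + amount
        minted = int(self.balance[who]) - before
        self.nfts[who] = self.nfts.get(who, 0) + minted

    def sell(self, who, amount):
        # Dropping below a whole unit burns the corresponding NFT.
        before = int(self.balance[who])
        self.balance[who] -= amount
        burned = before - int(self.balance[who])
        self.nfts[who] = self.nfts.get(who, 0) - burned
```

Buying 1.5 tokens mints one NFT; selling 0.6 drops you below a whole unit and burns it, while fractional buys mint nothing.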
community said “no, that was awesome, let's keep doing that”, but the original dev could only think about doing more rugs, so some in the community made the Pandora token as the new reference implementation and are pushing this draft standard's development along
in reality it solves some frictions with token + NFT launches, any NFT collection launch, and multi-series management within one contract, something ERC-1155 tried to achieve in a more complex way. Now people can launch NFT collections directly from an AMM's liquidity pool like Uniswap. No boutique websites or NFT marketplaces needed to trade the NFT or create liquidity. And it has a fun incentive for holding on to the tokens.
so far in practice, communities make separate markets for the NFTs, which trade at a premium to the token: the NFTs in the limited-size collection get burned over time by sellers and so have their own scarcity attributes, while the fungible token trades at a different, lower price, allowing for round-trip arbitrage. Automatic community with the arbitrage features. i.e. a Pandora NFT trades at $7,000 and comes with 1 token, while the token market alone trades at $5,800.
Guardrails: no longer do we have to use AI to fight AI; AWS has presets for blocking inappropriate content going into or out of the AI block.
Evaluation: Basically just give it a JSONL file of all the inputs and expected outputs, and compare two models or judge the quality of a model. This can be done by a human (it even manages the access of this team of humans). But more interestingly, it can be automated or done by AWS workers.
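The JSONL format itself is just one JSON object per line; a sketch (the key names here are placeholders, check the service docs for the exact schema it expects):

```python
import json

# Build a tiny evaluation dataset as JSONL (one JSON object per line).
# "prompt"/"expected" are placeholder key names, not the real schema.
examples = [
    {"prompt": "What is the capital of France?", "expected": "Paris"},
    {"prompt": "2 + 2 = ?", "expected": "4"},
]
jsonl = "\n".join(json.dumps(e) for e in examples)
```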
Knowledge base and agents: Seems like it's only for Claude, but damn, Claude has been impressive for a good price. Claude Sonnet (the mid-range) has been about as good as GPT-4 for half the price. The weakness has been things like knowledge base and AWS lets us just quickly set up vector DBs and embeddings. Then the agents feature lets us connect with it and combine it with all of the above.
I first wrote it the dumbest way possible, one big array with padding at the back. Worked fine actually for most modern use cases, but as this is also a learning experience, I want it to be best in class in performance.
I think I'll settle on gap buffer because the performance is great and it doesn't hurt my head.
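The appeal of the gap buffer is how little there is to it: the free space (the "gap") sits at the cursor, so inserts and deletes there are O(1), and moving the cursor just shifts the gap. A minimal sketch (illustrative, not best-in-class):

```python
class GapBuffer:
    """Minimal gap-buffer sketch: buf[gap_start:gap_end] is free space."""

    def __init__(self, capacity=16):
        self.buf = [None] * capacity
        self.gap_start = 0
        self.gap_end = capacity

    def _grow(self):
        # Double capacity, keeping the gap where the cursor is.
        tail = self.buf[self.gap_end:]
        gap = len(self.buf) + (self.gap_end - self.gap_start)
        self.buf = self.buf[:self.gap_start] + [None] * gap + tail
        self.gap_end = len(self.buf) - len(tail)

    def move_cursor(self, pos):
        # Shift the gap so it starts at text position `pos`.
        while self.gap_start > pos:
            self.gap_start -= 1
            self.gap_end -= 1
            self.buf[self.gap_end] = self.buf[self.gap_start]
        while self.gap_start < pos:
            self.buf[self.gap_start] = self.buf[self.gap_end]
            self.gap_start += 1
            self.gap_end += 1

    def insert(self, text):
        for ch in text:
            if self.gap_start == self.gap_end:
                self._grow()
            self.buf[self.gap_start] = ch
            self.gap_start += 1

    def delete(self, n=1):
        # Delete n characters before the cursor (shrink text into the gap).
        self.gap_start = max(0, self.gap_start - n)

    def text(self):
        return "".join(self.buf[:self.gap_start] + self.buf[self.gap_end:])
```

The only cost is moving the cursor far from the gap, which is a memmove; for typical editing (lots of edits in one spot) that's rare.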
An online game I play includes an optional two player Russian Roulette type feature (non-fatal). I got to wondering if there was an optimal betting percentage to use, if you set aside some money as a betting seed. So I spent time coding up a really ugly brute force "just run lots of games and see".
Pretty much the answer is you'll lose more often than you win; it looks like your best bet is around 2% of whatever betting money you have left.
If you play 75 games, at 2% of your betting pool, you'll come out ahead only about 49.8% of the time.
There are more efficient ways of working that out than what I did, which was to create a basic abstraction for a gun. For example, your odds of winning are essentially 50%, given two players. For every "game" I simulated, I could have just picked a random integer between 0 and 1 instead. Faster, with the same effect.
As best as I could find, there are no good betting strategies for a coin toss (which is what this really is).
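A minimal version of that brute-force approach, skipping the gun abstraction and just flipping a fair coin (all parameters are illustrative, not the original sim):

```python
import random

# Bet a fixed fraction of the pool on a fair coin, `games` times per
# trial, and estimate how often you end above the starting pool.
def fraction_ahead(fraction, games=75, trials=5000, seed=1):
    rng = random.Random(seed)
    ahead = 0
    for _ in range(trials):
        pool = 100.0
        for _ in range(games):
            bet = pool * fraction
            pool += bet if rng.random() < 0.5 else -bet
        if pool > 100.0:
            ahead += 1
    return ahead / trials
```

With a fair coin this hovers right around 50% for any small fraction, which matches the "no good strategy on a coin toss" conclusion.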
This is usually called https://en.wikipedia.org/wiki/Monte_Carlo_method
https://en.m.wikipedia.org/wiki/Kelly_criterion
Plugging in: p = 0.5, q = 0.5, b = 1.0, so the Kelly fraction is f* = p - q/b = 0.5 - (0.5 / 1.0) = 0.0. In other words, the optimal bet on a fair, even-money coin flip is nothing.
More than just using some formal language/tool, I wanted to learn the mathematics/ideas behind formal methods and how they are embodied in TLA/Z/the B method/etc. After a survey of available books I zeroed in on Understanding Formal Methods by Jean-François Monin, hoping to get an overall idea of how everything comes together. But what I got was a fire-hose/mishmash of so many different sub-fields/notations/abstractions used in the field that it is quite a struggle to get a good grasp on anything. The author's style is obtuse/challenging, and the contents are more of a survey/introduction than detailed explanations.
The result is that I am now interested in figuring out a whole lot of mathematical/logical sub-fields, which I suspect is going to occupy a lot of my time in the future.
Intermediate step - I feel pretty confident with the gory details of the kernel code now. I can probably build a custom kernel and boot qemu with either a simple C+assembly bare-metal kernel or the self-compiled kernel. It feels like the clouds have cleared and I can see the sun. Incidentally, the kernel source code is pretty well documented, but one thing that's missing is a much smaller list of the files that matter most. True Pareto here: 20% of the files carry the weight. You also need to know the subsystem you want to touch; chances are that subsystem is a much smaller set of files.
Finally - Got to reading about kernel packet handling, at the L2/L3/L7 levels, from NIC hardware to userspace. Turns out that eBPF [hello, old friend!] has a networking avatar called XDP, a pretty recent [<5 years] way of doing high-performance networking in the Linux kernel. Along the way, I got to know about network performance optimizations in the kernel, especially for modern multicore systems, like RPS/RSS/aRFS and DPDK/fd.io/VPP.
And now I feel the itch to apply this to some of our networks. In particular, bare-metal servers on Equinix Metal + AWS EC2 + Azure can be peered with either VPP or Bird to make a p2p connection that is a factor more performant than the off-the-shelf VPC interconnects/gateways.
I might extend the holiday by a few days, and I would love to talk to people who have hands-on experience with any of this. It's hard to contain my excitement, tbh.
I play with io_uring and multithreading. I am looking at event loops.
https://github.com/KaliedaRik/Scrawl-canvas/pull/75
Mine, at the moment, is precision timekeeping. Time nuts. I have a pile of oven-controlled crystal oscillators and GPSes (even the full Ublox time-specific LEA-M8).
I got a BG7TBL counter and multiple cheap GPSDOs to test my own.
I have a DAC1220 20-bit DAC on an ESP32 disciplining a TCXO from an old phone base station, counting the 10 MHz with the PCNT peripheral gated off the GPS 1PPS.
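The disciplining step is essentially a PI loop; a schematic sketch of the idea (gains, DAC scaling, and sign convention are illustrative assumptions, not the real firmware):

```python
# Schematic PI discipline loop: each 1PPS interval yields a count of
# TCXO cycles (nominally 10_000_000); the error nudges the DAC tune
# value. Gains and the "higher count -> lower DAC" sign are assumed.
def discipline(counts, kp=0.5, ki=0.05, dac=32768.0):
    integral = 0.0
    history = []
    for count in counts:
        err = count - 10_000_000          # cycles fast (+) or slow (-) per second
        integral += err                    # integral term removes steady offset
        dac -= kp * err + ki * integral    # proportional + integral correction
        history.append(dac)
    return history
```

In the real thing the 1 Hz update rate and quantized counts mean you average over longer gate times as the loop settles, but the structure is the same.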
Meantime, I learnt the esp-idf so I could have more control over things. Everything done with the esp-idf is way way more stable than using the Arduino wrapper, no idea why, maybe later versions?
The disciplining/tracking parameters are exposed by http and mqtt and put into influx.
I have a 5.5-digit multimeter (I repaired the classic HP 3478A). Maybe I need another digit; there goes another rabbit hole. Voltnuts.
https://www.sweetwater.com/c953--Synchronizers
https://en.wikipedia.org/wiki/Black_and_burst
https://www.sweetwater.com/insync/black-burst/
https://www.ebay.com/sch/i.html?_from=R40&_trksid=p2334524.m...
Might be interesting to convert an audio precision word clock into a precision computer clock. Audio atomic clocks are available ($7000):
https://en.antelopeaudio.com/products/10mx/
Without GPS, yes, a rubidium oscillator would be fun to have, and the phase noise may be better than the TCXO-disciplined oscillator, but the system I have, with the LEA-M8T and a good external antenna used in 'base station surveyed-in' mode, is more stable. I've been keeping an eye on eBay for an old one at a reasonable price. They do wear out, so it's a risky buy.
The black burst stuff is funny. I used to work in TV as a kid. The plate on my car was PAL-443
[1] https://news.ycombinator.com/item?id=39601201
1. Looked at what it would take to turn it into a sort of "pubnix" for some friends
2. ...which got me looking into how to set up Postfix to manage local emails (allegedly this works out of the box, but I must have screwed something up since I never did get my test messages from one user to the other)
3. Then on to looking at BBS systems, starting from Enigma 1/2. Didn't get too far into that since the theme customization scared me away (and not enough of my RL friends are nerdy enough to get into it)
4. Finally backing away from the pubnix thing again because of insufficiently nerdy friends (although one is humoring me in experimenting with SSB), I then instead set up a Synapse server to have my own identity in the Matrix ecosystem.
It's more fun than I thought because whipping up every cocktail you've ever heard of is actually quite simple. There are also some cool generalizations to uncover, so you can Pareto-principle your way to doing almost everything with a few tricks and ~10 liquids. In an evening, I could get someone to the point of knowing more than 2/3 of bartenders (for most it's just a job / the basics). Then once you've got a good handle on the basics, there's an endless adventure into variations and history. There are a lot of flavors out there you wouldn't believe existed; many concoctions of herbs and botanicals. It's a great social activity too.
I haven't tried it yet for home server stuff. I am still running containers on a Proxmox host.
Nix documentation is bad/incomplete, so I helped myself with some Youtube videos to get started.
This might get you started https://jvns.ca/blog/2024/01/01/some-notes-on-nixos/
I've developed my own film in the past but knowing so little about chemistry myself, it's still pretty much magic to me even after digesting all of the info from the series.
https://www.youtube.com/watch?v=YE9rEQAGpLw
I've been looking at digital backs for old cameras, such as the 'blad I have gathering dust. Sadly, they're either completely impractical or way above my budget. Hopeful that some day something better than a Polaroid back can be used to resurrect my old Hasselblads.
I wanted to be able to easily distinguish between the 60+ figurines of the Zombicide board game, so I figured I could paint them "quick and dirty".
Well 8 months later, I'm not finished because I "had to" learn about paint, color theory, paint mixing, human vision, brush types...
Being colorblind I gave no attention to colors around me, but I have since discovered I can see more shades than I was aware, and I'm having a blast just looking at the foliage... Which does not speed up the painting!
Otherwise, ThePrimeagen pisses me off always talking about Neovim and Vim motions, so I am right now in the painstaking process of learning Vim motions, and I want my life to end because of the learning curve.
Ended up getting a simple Rust function built, with some slight miscompilations via wasm + wasm2c. Now I'm going to try to get graphics working.
There is a surprising amount of public code containing calls to sony's licensed SDK, if you know what to search for (Not to mention the SDK, which was, "obtained" dubiously). Fascinating stuff
I was playing with w2c2 last week to try and get some stuff running on Mac OS X Leopard, but I couldn't quite get it working :(.
Watched a couple of videos on making AI play video games, looked into localai and gptpilot, failed to get it to write a Qt application. Played around with TTS and STT, and now I'm stuck in prompt hell with diffusers/dreamshaper.
(It's mostly there, but I feel like it's screwing with me because it always finds a way to screw up the picture.)
I don't currently use it in any serious projects aside from tinkering about with it, but it has been a lot of fun to learn and study.
Between the Nix package manager, the associated language, etc., there has been a lot to learn about, and it's been good fun. I have nixOS on my spare Thinkpad for toying with, and I have Nix on my main Debian systems, if I want to pull something from nixpkgs.
* Turns out the work is not 'weird' or 'Gnostic' but is directly addressing details from Lucretius, including paraphrasing his view of evolution and atomism, but refuting the claim there's no afterlife by basically appealing to the idea we're in a simulated copy of an original physical world where the spirit doesn't actually depend on a body, because there is no actual body.
* As I dug more into the various mystery religions the followers of the work claimed as informing their views, I saw a number of those were associated with figures various Greek historians were saying came from the same Exodus from Egypt as Moses.
* Turns out a lot of the ahistorical details in the Biblical Exodus narrative better fit the joint sea peoples and Libyan resistance who end up forcibly resettled into the Southern Levant latter on. In the past decade we've also started finding early Iron Age evidence of Aegean and Anatolian settlement and trade previously unknown in the area, including in supposed Israelite settlements like Tel Dan, lending support to the theory that Dan were the Denyen sea peoples.
* Also turns out that in just the past few years a number of Ashkenazi users have been puzzled by their genomic similarity to ancient DNA samples, where the closest overall match in a DNA bank was 3,500 year old Minoan graves sequenced in 2017 or that they have such a high amount of Neolithic Anatolian (which the 2017 study found was effectively identical to Minoan).
* The G2019S LRRK2 mutation that's almost only found among the Libyan Berbers and the Ashkenazi appears to have originated with the former but appeared in the ancestry of the latter ~4,500 plus/minus 1k years. Which is a window that predates the emergence of the Israelites in the first place, but is on the cusp of the sea peoples/Libyan alliance.
* There's also been discovery of endogamy among some of the Minoan populations. Did the Ashkenazi endogamy evidenced from their emergence in Europe and the bottleneck in the first millennium CE actually go back much further than we've been thinking? Maybe Tacitus wasn't so off base when he talked about how some claimed the Exodus involved people from Crete hiding out in Libya.
Anyways, that's a very rough summary of some of the rabbit holes I was going down.
Bonus: Herodotus's description of Helen of Troy spending the whole time in Egypt has two datable markers to the 18th dynasty, which is when Nefertiti, "beautiful woman who arrived" is around during a complete change to Egyptian art and religion while she's the only woman in history to be depicted in the smiting pose, with her only noted relatives being a sister and wetnurse.
Sounds a bit gnostic no?
You had this first century response to Epicureanism's naturalism as a foundation. In that paradigm, the Platonist demiurge recreating the physical world before it was an agent of salvation, liberating the copies from the certainty of death from the Epicurean original.
What happens is that Epicureanism falls from popularity over the second century, so in parallel to the increased resurgence of Platonism, Plato's forms becomes the foundation instead. For Plato, there was a perfect world of the blueprints of everything, the corrupted physical versions of those forms, and then the worst of all was the images of the physical. So the Thomasine salvation by being in the images of physical originals is through that lens corruptive.
So as the foundation shifted from the Epicurean original world of evolution (Lucretius straight up described survival of the fittest in book 5) to Plato's perfect forms, a demiurge creating a copy of what predated it shifted from being a good thing to trapping people in a corrupted copy.
For the first 50 years of the discovery of the Gospel of Thomas, it was mistakenly thought to be Gnostic. This changed at the turn of the 21st century with the efforts of Michael Allen Williams and Karen King, and it's now labeled as "proto-Gnostic." It's absent a lot of the features typically associated with 'Gnosticism' though that term in general should be retired as it's turned out that there isn't any single set of beliefs to be considered 'Gnostic' in the first place (this was the chief realization of scholars over the past twenty years).
I'm pretty sure that there was no 'Thomas.' My guess is that the philosophy of being in a twin universe and a twin of an original humanity ends up anthropomorphized by or before "doubting Thomas" in John and ends up credited with the tradition making those philosophical claims which was also denying the physical resurrection.
In the Gospel of Thomas itself, there's only two mentions of a 'Thomas,' both likely later additions. Moreso the work features him having female disciples and discussions directly with them, and the only later tradition following it claimed a female teacher named Mary as the starting point of their sect.
The Gospel of Thomas is a collection of sayings, and that core may have gone by different names before the 2nd century when it's rolled up in a more secretive context as attributed to 'Thomas' (despite the core itself seemingly being more anti-secretive than any other texts in the early Christian tradition).
Despite filing a bug ticket with the library, I received no response after a month. I struggled to debug this library-within-a-library. One day I realized the rabbit hole I was in and switched over to Tornado; no problems since.
Using semantic-router for dynamic conversation routing, and LiteLLM for model providers.
It was lots of fun to learn and build. I will be adding function calling support (tool use) to make it more capable, like an agent, in the future.
Thanks in advance.
I think this does a good job too: https://www.goodreads.com/en/book/show/50284837
I would also recommend The People's Forum / The Socialist Program's recent class on Lenin: https://m.soundcloud.com/thesocialistprogram/sets/lenin-and-...
One-person brake fluid replacement / bleeding procedure after replacing a brake master cylinder. For my car, I can now do this without taking the wheels off. I'll be done within a couple of days, and after that I'll be kind of an expert.
I've actually proved it several times...except for the insignificant detail that I glossed over that didn't seem important but tanks the proof.
Someday I'll have to publish my "book of lemmas that don't prove the collatz conjecture."
Some history podcasts had me digging into the Napoleonic Wars and Israel/Palestine.
Also a recent interest in human health and diseases has basically sent me down the path of self-study equivalent to a Kinesiology/Exercise Science/Sports Physiology degree.
This is more than a rabbit hole, it's a fractal that changes as you zoom in and out
I knew about most of them a little or fair bit already, but there's always something to learn the deeper you go :)
Been tinkering with that for an absurdly long time vs. just throwing some Python onto a $5 VPS, but it's been fun, and I learned a decent bit about over-engineering pitfalls along the way.
Somehow and unintentionally as the search began from some random article I’d read, this seemingly unrelated subject ended up uncovering some insights into a problem with deduping database rows I’d been working on for another project.
Margaritas -> Jello Shots -> Chamoy/Tajin rim/topper -> Pop Rocks -> History of Pop Rocks
ChatGPT gave some nice-looking code, but it wouldn't work at all. Good ideas, though.
I then set up a VAX 11/780 running OpenVMS 7.3, which I can telnet into from outside. ;-)
Even today, I'm seeing conflicting reporting of quotes. For example the CBC has one article that quotes the police chief:
> Outside court, however, Toronto Police Chief Myron Demkiw struck a different tone. "While we respect the judicial process and appreciate the work of everyone involved in this difficult case, we were hoping for a different outcome," he said.
https://www.cbc.ca/news/canada/toronto/umar-zameer-acquittal....
However, the video of the Chief's statement is slightly different (and quoted correctly in another article):
> "While we respect the judicial process and appreciate the work of the 12 citizens who sat on a very difficult case, I share the feelings of our members who were hoping for a different outcome," Demkiw said.
https://www.cbc.ca/news/canada/toronto/umar-zameer-verdict-1...
The first quote cuts out essential information and does not note that the quote was not verbatim (as presented). It's especially problematic because the two versions invite subtly different interpretations now that his suggestion of "hoping for a different outcome" is being addressed. (There's a subtle difference between 'sharing the feelings of members who were hoping for a certain outcome' and 'hoping for a different outcome' yourself -- and this could be major if it's seen as the police service suggesting they wanted the man, who was declared innocent, to have been convicted.)
(One other possibility is that the Chief did in fact mention both lines, and that they are so similar because it was based on his official public statement).
It's a dangerous game when news articles rewrite content pulled from news wires (e.g., Canadian Press in this case) and it's not always clear which sections are pulled from where. There's an argument for legitimate AI rewriting that at least leaves in direct sources for the content it pulls in/summarizes (a task that ends up being a burden for deadline-facing humans).
I later ported the game to the ZX Spectrum, because that was a fun challenge, and I only needed a few basic I/O operations - "write to screen", "read a line of input", etc, etc.
It occurred to me that I could reimplement the very few CP/M BIOS functions and combine those implementations with a Z80 emulator to run it "natively". So I did that, then I wondered what it would take to run Zork and other games.
Slowly I've been reimplementing the necessary CP/M BDOS functions so that I can run more and more applications. I'm not going to go crazy, anything with sectors/disks is out of scope, but adding the file-based I/O functions takes me pretty far.
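The actual emulator is written in Go, but the trapping idea can be sketched in a few lines of Python (handler names here are made up for illustration; function numbers 2 and 9 and the call convention are from the standard CP/M BDOS API, where a guest program calls address 0x0005 with the function number in register C):

```python
# Sketch of host-side CP/M BDOS dispatch: instead of emulating real
# BDOS code, the emulator traps the call at 0x0005 and runs a Python
# handler keyed on register C. Registers are modeled as a plain dict.

def bdos_console_output(cpu, memory):
    # Function 2: output the character held in register E.
    return chr(cpu["E"])

def bdos_print_string(cpu, memory):
    # Function 9: output the '$'-terminated string pointed to by DE.
    addr, out = cpu["DE"], []
    while memory[addr] != ord("$"):
        out.append(chr(memory[addr]))
        addr += 1
    return "".join(out)

BDOS_HANDLERS = {2: bdos_console_output, 9: bdos_print_string}

def on_bdos_call(cpu, memory):
    # Dispatch on register C; unimplemented functions fail loudly,
    # which is how you discover what the next application needs.
    fn = cpu["C"]
    if fn not in BDOS_HANDLERS:
        raise NotImplementedError(f"BDOS function {fn:02X} not implemented")
    return BDOS_HANDLERS[fn](cpu, memory)
```

Growing the emulator then amounts to filling in that dispatch table one function at a time, file I/O included.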
At the moment I've got an annoying bug where the Aztec C compiler doesn't quite work under my emulator and I'm trying to track it down. The compiler produces an assembly file which is 100% identical to the one produced on my real hardware, but for some reason the output from assembling that file is broken - I suspect I've got something wrong with my file-based I/O, but I've not yet resolved the problem.
TL;DR: writing a CP/M emulator in golang, and getting more and more software running on it - https://github.com/skx/cpmulator
I could not derive a single piece of solid science in any of it.
It was remarkable how much content there was on this subject with little to no actual information - enjoy:
https://www.youtube.com/@MFMP
In case you want to have unlimited fun yourself, ask yourself: "What is the purpose of X?" and then "how can you measure/assess the fit-for-purpose of it?"
Possible Side-effects: #1 you might get disgusted and even angry with the self-declared "experts" who have not even understood the basic concepts.
#2 you might learn how little you understand yourself and how deep the rabbit hole goes.
Example in Software Development: Understand the quality dimensions for a "definition of ready" and what impact a good/used DoR has compared to a bad/not used DoR for the efficiency and effectiveness of a software development process.
Long story short, I was inspired by the Super Mario 64 and REDRIVER2 decompilation projects and wanted to do one. I picked a PlayStation video game from my childhood, started Ghidra and then I quickly realized that the game code's a complete mess. It's bad enough that I don't see myself ever finishing this project unless I can somehow divide-and-conquer this problem into manageable pieces. But you can't exactly break a program into pieces... can you?
So I've started to think for a bit and remembered the basic toolchain workflow: source files are compiled into assembly files, which are assembled into object files, which are all linked together into a program. The last bit stood out to me and I wondered: what if I could undo the work of the linker? I'd get a bunch of object files, dividing the original reverse-engineering problem into smaller pieces.
I searched online and found absolutely nothing on the topic. That should've tipped me off, but instead I started scribbling on a piece of paper. Object files are made up of sections (named arrays of bytes), symbols (named offsets within these sections) and relocations (spots to patch with a symbol's address). The linker lays out the sections, computes the addresses of the symbols and then patches the relocation spots to produce the program. I can't just take the program bytes and stuff them into object files because of these applied relocations, but if I could somehow undo them...
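As a toy illustration of what undoing a relocation means, here is a single 32-bit absolute relocation being applied and then reversed (the function names and layout are invented for the sketch; real object formats like ELF store this metadata in relocation tables):

```python
import struct

def apply_reloc(section: bytearray, offset: int, symbol_addr: int):
    # Linker side: patch a 4-byte little-endian slot with the
    # symbol's final computed address.
    struct.pack_into("<I", section, offset, symbol_addr)

def undo_reloc(section: bytearray, offset: int):
    # Delinker side: read back the patched address so it can be
    # turned into (symbol, relocation) metadata again, then zero the
    # slot the way an unlinked object file would have it.
    addr = struct.unpack_from("<I", section, offset)[0]
    struct.pack_into("<I", section, offset, 0)
    return addr

section = bytearray(8)
apply_reloc(section, 4, 0x08049000)
assert undo_reloc(section, 4) == 0x08049000
assert section == bytearray(8)       # back to pristine object bytes
```

The hard part, of course, is not the byte patching but figuring out where those slots are and which symbol each one pointed at, with no relocation table to consult.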
The good idea fairy struck, and the fairy struck hard.
I write scripts in Jython, and after a couple hundred lines I get results on sample test cases. I try them on the game and they take forever due to algorithmic complexity. I rewrite the implementation in Java, forking Ghidra in the process. I rewrite it a couple more times because my analyzer keeps hitting edge cases. I build an elaborate and exhaustive test harness because I keep introducing hard-to-track-down regressions. I submit a couple of pull requests to Ghidra to solve some pain points and reduce the size of the diff, which spans thousands of lines. I reply to questions from the Ghidra team with walls of text trying to explain my use case, but the PRs get rejected because they don't fit Ghidra's current design well.
When the Ghidra team rejected my stuff, probably because by that point I was speaking in the native language of Cthulhu, I really should've taken the hint.
Instead, I spin off my fork as a Ghidra extension to alleviate the maintenance burden, which by now is getting close to ten thousand lines. I keep rewriting my MIPS relocation analyzers again and again to improve their correctness, always hitting a new edge case. I decide to start a blog, because I'm tired of trying to explain this stuff from basic principles to people, since there's no literature on the topic. I get side-tracked writing a complete series of articles on the basics of reverse-engineering to introduce the topic. I get side-tracked again writing a series of articles on the applications of delinking to software ports, with a case study on an x86 program that requires me to write relocation analyzers for that architecture and perform refactorings to support multiple ISAs and object file formats.
I'm finally back on reverse-engineering the video game that started all of this and get side-tracked once more because I'm documenting the process in another series of articles. By sheer luck I stumble upon a SYM debugging symbols file, but I don't have the matching executable for it, so I build a placeholder one that matches its shape, then import the placeholder into Ghidra, then write about a thousand lines of Java to import this data on top of the placeholder, then write a bunch of scripts and my own correlators to version track it onto an executable I do have, because Ghidra doesn't know what to do with a source executable that doesn't have a single initialized byte to its name. I've tried to engage with the Ghidra community about this latest problem, but no answer. I assume they're probably busy trying to find an exorcist, so I carry on regardless.
Two years. Two years I've spent digging this rabbit hole, which is probably worth a thesis or two. I know enough about delinking now that I could probably write a book that would read like a Lovecraftian horror story to people who develop linkers for a living. I've automated this stuff down to making a selection inside Ghidra and clicking on "File > Export Program...", but there's only so much you can do to make accessible, or even understandable, a technology that lets you literally rip code out of a Linux program and shove it into a Windows program, or out of a PlayStation game and into a Linux program, and have it work, in spite of ABIs or common sense.
TL;DR I've developed, by accident, a reverse-engineering technique that would give professors teaching Computer Science 101 an existential crisis.
The problem is properly identifying the relocation spots and their targets inside a Ghidra database, which is based on references. On x86 it's fairly easy because there's usually a 4-byte absolute or relative immediate operand within the instruction that carries the reference. On MIPS it's very hard because of split MIPS_HI16/MIPS_LO16 relocations, where the actual reference can be hundreds of instructions away.
So you need both instruction flow analysis strong enough to handle large functions and code built with optimizations, as well as pattern matching for the various possible instruction sequences, some of them overlapping and others looking like regular expressions in the case of accessing multi-dimensional arrays. All of that while trying to avoid algorithms with bad worst cases because it'll take too long to run on large functions (each ADDU instruction generates two paths to analyze because of the two source registers).
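For readers unfamiliar with why HI16/LO16 pairs are painful, here is a small Python sketch of the standard MIPS convention for loading a 32-bit address in two instructions (this is textbook ABI behavior, not code from the extension):

```python
def split_hi_lo(addr: int):
    # MIPS materializes a 32-bit address with two instructions:
    #   lui   rt, HI16(addr)       ; upper half into bits 31..16
    #   addiu rt, rt, LO16(addr)   ; lower half, *sign-extended*
    # Because the low half is sign-extended, the high half must be
    # rounded up whenever bit 15 of the address is set.
    lo = addr & 0xFFFF
    hi = ((addr + 0x8000) >> 16) & 0xFFFF
    return hi, lo

def rejoin(hi: int, lo: int):
    # What the CPU effectively computes: (hi << 16) + sign_extend(lo).
    lo_signed = lo - 0x10000 if lo & 0x8000 else lo
    return ((hi << 16) + lo_signed) & 0xFFFFFFFF

# Round-trips even when the carry adjustment kicks in (lo >= 0x8000).
for addr in (0x00400000, 0x1000FFFF, 0x80018000):
    hi, lo = split_hi_lo(addr)
    assert rejoin(hi, lo) == addr
```

A delinker has to run this logic backwards: find the lui, find the matching low-half instruction that may be far away on a different control-flow path, and recover which address the pair encoded.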
Besides that, you're working on top of a Ghidra database mostly filled by Ghidra's analyzers, which aren't perfect. Incorrect data within that database, like constants mistaken for addresses, off-by-n references or missing references will lead to very exotic undefined behaviors by the delinked code unless cleaned up by hand. I have some diagnostics to help identify some of these cases, but it's very tricky.
On top of that, the delinked object file doesn't have debugging symbols, so it's a challenge to figure out what's going wrong with a debugger when there's a failure in a program that uses it. It could be an immediate segmentation fault, or the program can work without crashing but with its execution flow incorrect or generating incorrect data as output. I've thought about generating DWARF or STABS debugging data from Ghidra's database, but it sounds like yet another rabbit hole.
I'm on my fifth or sixth iteration of the MIPS analyzer, each one better than the previous one, but it's still choking on kilobytes-long functions.
Also, I've only covered 32-bit x86 and MIPS on ELF for C code. The matrix of ISAs and object file formats (ELF, Mach-O, COFF, a.out, OMF...) is rather large. C++ or Fortran would require special considerations for COMMON sections (vtables, typeinfos, inline functions, default constructors/destructors, implicit template instantiations...). Also, you need to mend potentially incompatible ABIs together when you mix-and-match different platforms. This is why I think there's a thesis or two to be done here, the rabbit hole is really that deep once you start digging.
Sorry for the walls of text, but without literature on this I'm forced to build up my explanations from basic principles just so that people have a chance of following along.
But let me stick with some basics. So, I can write and compile an x86 test.c program. Then, I use your extension and undo the linking. Then, I use the results to link again into a new executable? Are the executables identical? When does it break?
How much of a task is it to make it a standalone program? What about x64 support?
There are links in the README of my Ghidra extension repository that explain these use-cases in-depth on my blog, but as a summary:
- You can delink the program as a whole and relink it. This can port a program from one file format to another (a.out -> ELF) and change its base address.
- You can delink parts of a program and relink them into a program. This can accomplish a number of things, like transforming a statically-linked program into a dynamically-linked one, swapping the statically linked C standard library for another one, making a port of the program to a foreign system, creating binary patches by swapping out functions or data with new implementations...
- You can delink parts of a program and turn them into a library. For example, I've ripped out the archive code from a PlayStation game built by a COFF toolchain, turned it into a Linux MIPS ELF object file and made an asset extractor that leverages it, without actually figuring out the archive file format or even how this archive code works.
You can probably do even crazier stuff than these examples. This basically turns programs into Lego blocks. As long as you can mend them together, you can do pretty much anything you want. You can also probably work on object files and dynamically-linked libraries too, but I haven't tried it myself.
> Are the executables identical?
Probably not byte-identical, but you can make executables that have the same observable behavior if you don't swap out anything in a manner that impacts it. The interesting stuff happens when you start mixing things up.
> When does it break?
Whenever the object file produced is incorrect or when you don't properly mend together incompatible ABIs. The first case happens mostly when the resynthesized relocations are missing or incorrect, corrupting section bytes in various ways. The second case can happen if you start moving object files across operating systems, file formats, toolchains or platforms.
> How much of a task is it to make it a standalone program?
My analyzers rely on a Ghidra database for symbols, data types, references and disassembly. You can probably port/rewrite that to run on top of another reverse-engineering framework. I don't think turning it into a standalone program would be practical because you'll need to provide either an equivalent database or the analyzers to build it, alongside the UI to fix errors.
> What about x64 support?
Should be fairly straightforward since I already have 32-bit x86 support, so the bulk of the logic is already there.
I encourage you to read my blog if you want to get an idea of how this delinking stuff works in practice. You can also send me an email if you want; Hacker News isn't really set up for long, in-depth technical discussions.
I'm mostly using this delinking technique on PlayStation video games, Linux programs from the 90s and my own test programs, so I'm not that worried about security implications in my case. If you're stuffing bits and pieces taken from artifacts with questionable origins into programs and then execute them without due diligence, that's another story.