41 comments

  • sureglymop 10 hours ago
    This is great. I want this but for much more. I want it to also be a nextcloud and zotero replacement, storing all my documents and books and documenting when I added, opened, edited them. I want it to store all notes that I write. I want it to record and display all browser tabs I open, when I do so, everything I copy and paste, every key I press. I want a record of everything I do in the digital world that is searchable and that can answer the question: "what was I working on 2 weeks ago on this day?" and bring back all the context also.

    For obvious reasons this has to be self hosted and managed. I'm not interested in creating surveillance software or technology.

    It sounds extreme, but whenever I have seen people's Obsidian setups with heaps of manual and bidirectional linking, I have always thought that time is the one thing we should look at. If I look up some concept on Wikipedia today, there is a higher chance of me looking up related concepts, or working on something related, around that time too.

    • hn_acc1 8 hours ago
      I think Microsoft has some kind of product that can help you Recall what you were working on?
    • michaelterryio 3 hours ago
      I can never find it now, but someone had an idea for a computing system in which every object was organized purely temporally, and you'd only access things outside the temporal ordering by applying filters.

      I wish I could find it again.

    • mholt 9 hours ago
      > I want it to also be a nextcloud and zotero replacement, storing all my documents and books and documenting when I added, opened, edited them. I want it to store all notes that I write.

      Sounds in-scope so far. Long-term, perhaps, and maybe optional add-on features rather than built-in, but we'll see.

      > I want it to record and display all browser tabs I open, when I do so, everything I copy and paste, every key I press.

      That is possible in theory, but for me personally that's just too detailed. :D I wouldn't need all that granularity, myself.

      But hey, the vision is pretty similar. We are generating all sorts of data to document and understand our lives -- we don't even have to deliberately write a journal -- but we have no way of comprehending it. This app is an attempt to solve that.

      • LelouBil 4 hours ago
        I feel like a Nextcloud replacement is out of scope?

        When reading the website, I mostly got the impression that Timelinize sits *behind* data-generating applications, showing and cross-referencing their data.

        I think the way to go is some sort of Nextcloud extension that puts data into Timelinize.

        I also saw on the website that it tracks "documents", though I haven't tried that yet. I would hope it can use external document sources that already process documents, like Paperless for example (which I am already using and liking).

        • mholt 1 hour ago
          Well, I don't plan on 1:1 feature parity with Nextcloud or any comprehensive cloud suite. But in terms of what was mentioned ("storing all my documents and books and documenting when I added, opened, edited them. I want it to store all notes that I write"), I think that's in scope.

          So yes, Timelinize sits behind your current workflows. It's more of a resting place for your data (which you can still organize and curate; more features to come in this regard), but I can also see why it might make sense as one's primary photo library application in the future, with more development.

          As for document storage, this is still WIP, and the implementation could use more discussion to clarify the vision and specifics.

      • sureglymop 5 hours ago
        > That is possible in theory, but for me personally that's just too detailed. :D I wouldn't need all that granularity, myself.

        I do see that. I think for that reason it would be cool to support a kind of extension system for arbitrary "collectors", and then solid filtering to cut the data down.

        The vision is definitely similar. I am very pleasantly surprised to see your project, and I also like your ideas/roadmap on the website. I know you are building this for yourself/your family, but I certainly would be open to contributing to it.

        • mholt 1 hour ago
          Wonderful, I'd love to collaborate!
          • ramses0 1 hour ago
            Dig deep on "dogsheep" from Mr. Willison. We're all circling in the same orbit here.
      • ignoramous 8 hours ago
        Timelinize looks rad. Congratulations.

        > That is possible in theory, but for me personally that's just too detailed. :D I wouldn't need all that granularity, myself.

        Think this can go quite far with just the browsing history & content of viewed webpages.

        • mholt 8 hours ago
          Thanks! Yes, I agree. Someone already implemented a Firefox history data source; I don't think it includes the _content_ of the pages, but that could be interesting.
  • chrisweekly 11 hours ago
    Oh yeah, mholt is notable for having created Caddy (the web server). My interest in Timelinize just went up.
  • Jarwain 55 minutes ago
    Oh I love this oh so much. It lines up with a lot of things I've wanted or am planning. So I'm definitely going to take a dive into your source for inspiration

    One thing I wish for encompasses the idea that I might want to share different slices of my life with different groups

    • mholt 32 minutes ago
      Sharing is planned. Long term roadmap but definitely on my list!
  • BubbleRings 1 hour ago
    Great work. Various ideas here:

    You might suggest the following use case to users: "If you want to create a Timelinize data store, but don't feel that your own local systems are secure enough to safely hold a basket with a copy of every egg in your life, you might consider what some of our customers do: once or twice a year, update the data store, but keep it on an external disk. When the update is done, take the disk offline and keep it in a drawer or safe."

    Also

    I always wondered how cool it would be if I could tell some Spotify-like system, “I’m 20 miles away from the lake, we are going to stay in the cabins a week, just like we did 10 years ago. Play me the exact same songs now that played at every turn back then.”

    Also

    For a name, how about ChronEngine? That name seems pretty free and clear of previous use. If you like it, grab ChronEngine.com before some squatter does, and thank me with a phone call; I would enjoy a quick chat with you.

    Also

    Your web page might benefit from a grid that lists all the input sources you accept, with hotlinks next to the names giving a pop-up summary of what each source is about, and maybe some color coding like "green = rock solid lately", "yellow = some users reporting problems", "red = they just updated their format, it's broken, we are working on it". You will face (or are already facing) challenges similar to Trillian's, a chat client from the early 2000s that tried to maintain ongoing connection compatibility with multiple other chat networks such as AIM/ICQ/MSN. The grid could also have a "suggested source sets" filter that helps people find which 5 (for example) input sources suit their use style.

    Oh, and keep a list of anybody who says they have done something elaborately similar with Excel (like me, and at least one other person in this thread), and maybe have a discussion with them sometime; they might have some useful insights.

    Let’s hear it for people on the opposite side of the “go fast and break things” coin! My first project took 16 years. My current one I started 28 years ago!

    • BubbleRings 54 minutes ago
      And one more idea from me. I could see your current system deliberately kept as it is, but offered as a two-part system, where the second part is: "once you get your timeline built and have reviewed it carefully, click here and the local-only LLM will have access to the whole thing." The two-part nature could be a big competitive advantage, helping people carefully build an LLM system that, for instance, they could then offer to their whole family to peruse, without having to worry too much that it accidentally includes information it should not.
  • akersten 11 hours ago
    This is an amazing idea, but do I have to run Google Takeout every time I want to update the data[0]? Unfortunately that's such a cumbersome process that I don't think I'd use this. But if my timeline could update in near real time, this would be a killer app.

    [0]: https://timelinize.com/docs/data-sources/google-photos

    • mholt 11 hours ago
      Yeah. Major thorn in my side. I spent hours trying to automate that process by using Chrome headless, and it kinda worked, until I realized that I needed to physically authenticate not just once, but every 10 minutes. So, it basically can't be automated since 2FA is needed so often.

      In practice, I do a Takeout once or twice a year. (I recommend this even if not using Timelinize, so you can be sure to have your data.)

      • whistle650 11 hours ago
        I thought you could set up an automatic Takeout export periodically, and choose the target to be your Google Drive. Then via a webapp oauth you could pull the data that way. Frequency was limited (looks like it says the auto export is “every 2 months for 1 year”). So hardly realtime, but seems useful and (relatively) easy? Does a method like that not work for your intentions?
        • mholt 11 hours ago
          Will have to look into that. Sounds like it could be expensive but maybe worth it.
          • robinwassen 7 hours ago
            You can schedule the takeout to Drive, then use a tool such as rclone (amazing tool) to pull it down.

            It should not add any costs except the storage for the takeout zip on drive.

            Look at supported providers in rclone and you might find easy solutions for some hard sync problems: https://rclone.org/#providers

            • mholt 7 hours ago
              > except the storage for the takeout zip on drive.

              Yeah, that's the cost I'm talking about. It essentially amounts to paying an extra subscription to be able to download your data [on a regular basis].

              I'm a big rclone fan btw :) I'm sure there's some future where we do something like this to automate Takeouts.
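              A rough sketch of that automation, assuming Takeout is scheduled to export into a Drive folder named `Takeout` and an rclone remote called `gdrive` is already configured (both names are assumptions):

```shell
# crontab entry: daily at 03:00, pull recent Takeout archives from Drive
# down to a local folder that an importer (or a person) can pick up later
0 3 * * * rclone copy gdrive:Takeout /data/takeout --max-age 70d
```

              Since the scheduled export only runs every couple of months, a generous `--max-age` window skips archives that were already downloaded and keeps transfers cheap.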

      • akersten 11 hours ago
        Some kind of companion app that runs on my phone and streams the latest data (photos, location history, texts, etc.) back to the timeline would probably be more tractable for live updates. But that is probably a wildly different scope than the import-based workflow. This is very cool regardless.
        • mholt 11 hours ago
          For sure.

          About 5-6 years ago, Timelinize actually used only the Google Photos API. It didn't even support imports from Takeout yet. The problem is the API strips photos of crucial metadata, including location, and gives you nerfed versions of your data. Plus the rate limits were so unbearable that I eventually ripped it out.

          But yeah, an app that runs on your phone would be a nice QoL improvement.

          • apitman 3 hours ago
            Is takeout the only way to get the original photos out?
            • mholt 2 hours ago
              As far as I know, yes. ("Original" is a strong word, but it's pretty close if you don't have space saver enabled / pay for storage.)
        • _flux 10 hours ago
          Syncthing from phone to a directory on PC?

          That's what I do. Though I don't then put them into any system. Yet.

      • sroussey 11 hours ago
        I did this by creating my own small password manager.
      • clueless 11 hours ago
        How easy would it be to integrate this with Immich (instead of needing access to Google Photos)?
      • airtonix 8 hours ago
        [dead]
  • kylecazar 4 hours ago
    "Because Timelinize is entity-aware, it can project data points onto a map even without coordinate data. If a geolocated point is known for an entity around the same time as others of that entity's data points, it will appear on the map."

    In the context of Timelinize, this is great! Outside of that, this sentence really drives home how much data Google could join on me -- a heavy Android/Chrome/Gmail/Maps w/ timeline user.

    What are you planning with weather? Associating entities that have a known location with historical temp/forecast data?

    • mholt 3 hours ago
      Something like that, yeah. Augmenting public data sets like weather or news to add context to your timeline.
  • totetsu 34 minutes ago
    For many years I kept Tiny Travel Tracker running on my Android phone, and would periodically import its data into OSM maps. It was nice to have a record of exactly all the places I had wandered to, and not also share that with Five Eyes. https://f-droid.org/en/packages/com.rareventure.gps2/
  • dav43 1 hour ago
    I have been recording my GPS location every few minutes for the past 10 years for this exact product.

    Looks interesting.

    • zdc1 1 hour ago
      Out of curiosity, how have you been doing this?
  • whacked_new 2 hours ago
    Super interested in this as well (and thank you for Caddy)

    How does this handle data updating/fixing? My use case is importing data that's semi-structured. Say you get data from a third-party provider in one dump, and it includes an event called "jog". Then they update their dump format so "jog" becomes subdivided into "light run" vs. "intense walk", and they also apply it retroactively. In this case you'd have to reimport a load of overlapping data.

    I saw the FAQ, but it only mentions that imports are not strictly additive.

    I am dealing with similar use cases of evolving data and don't want to deal with SQL updating, and end up working entirely in plain text. One advantage is that you can use git to enable time traveling (for a single user it still works reasonably).

    • mholt 1 hour ago
      Glad you like Caddy!

      > How does this handle data updating / fixing?

      In the advanced import settings, you can customize what makes an item unique or a duplicate. You can also configure how to handle duplicates. By default, duplicates are skipped. But they can also be updated, and you can customize what gets updated and which of the two values to keep.

      But yes, updates do run an UPDATE query, so they're irreversible. I explored schemas that were purely additive, so that you could traverse through mutations of the timeline, but this got messy real fast, and made exploring (reading) the timeline more complex/slow/error-prone. I do think it would be cool though, and I may still revisit that, because I think it could be quite beneficial.
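      The skip-or-update behavior described here can be sketched with SQLite's upsert clause. The table and the uniqueness key below are hypothetical, not Timelinize's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE items (
        source_id TEXT,       -- stand-in for whatever defines uniqueness
        timestamp INTEGER,
        payload   TEXT,
        UNIQUE (source_id, timestamp)
    )
""")

def import_item(source_id, timestamp, payload, on_duplicate="skip"):
    """Insert an item; on a duplicate, either skip it or update it in place."""
    if on_duplicate == "skip":
        clause = "ON CONFLICT (source_id, timestamp) DO NOTHING"
    else:  # "update": an in-place UPDATE, so the old value is gone
        clause = ("ON CONFLICT (source_id, timestamp) "
                  "DO UPDATE SET payload = excluded.payload")
    conn.execute(
        f"INSERT INTO items (source_id, timestamp, payload) VALUES (?, ?, ?) {clause}",
        (source_id, timestamp, payload),
    )

import_item("fitness", 1700000000, "jog")
import_item("fitness", 1700000000, "light run")                         # duplicate: skipped
import_item("fitness", 1700000000, "light run", on_duplicate="update")  # duplicate: overwrites
```

      By default the conflict clause does nothing, matching "duplicates are skipped"; with `on_duplicate="update"` the stored payload is overwritten in place, which is why such a change is irreversible.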

      • whacked_new 58 minutes ago
        Thanks for the reply! I'll have to try this out... it almost looks like what Perkeep was meant to become.

        One interesting scenario re: time traveling is if we use an LLM somewhere in data derivation. Say there's a secondary processor of, e.g., journal notes that yields one kind of feature extraction, but the model gets updated at some point; then the output possibilities expand very quickly. We might also allow human intervention/correction, which should take priority and resist overwrites. Assuming we're caching these data, they'll also land somewhere in the database, and unless provenance is first-class, they'll look just as much like ground truth as anything else.

        Bitemporal databases look interesting but the amount of scaffolding above sqlite makes the data harder to manage.

        So if I keep ground truth data as text, looks like I'm going to have an import pipeline into timelinize, and basically ensure that there's a stable pkey (almost certainly timestamp + qualifier), and always overwrite. Seems feasible, pretty exciting!

  • Tepix 10 hours ago
    Nice project! If you don't like "timelinize" - have you looked at latin names? Perhaps something like Temperi?

    In terms of features, I'd like to see support for FindPenguins. A lot of interesting data (photos, videos, GPS coordinates, text) is already there.

    • mholt 9 hours ago
      A few latin names have been suggested, but nothing has stuck. The problem is they are usually difficult to spell and pronounce, which isn't really an improvement over the current situation :)

      FindPenguins is cool! I don't use it myself, but anyone is welcome to implement a data source for it.

  • aetherspawn 5 hours ago
    This seems like the perfect thing to mix with financial records (ie bank feeds) and a local LLM.

    I’m not sure what you’d use it for … exactly … but it could probably reconcile and figure out all your credit card charges based on your message history and location, allocate charges to budgets, and show more analytics than you’re probably interested in knowing.

    People with a cloud connected car such as a Tesla could probably get some real “personal assistant” type use cases out of this, such as automatically sorting your personal and business travel kms, expenses and such for tax purposes.

    There’s probably other use cases like suggesting local experiences you haven’t done before, and helping you with time management.

    • mholt 2 hours ago
      Yeah, I hear this a lot; I would love to have my financials and an LLM integrated as well!

      Could make for a very interesting/useful/private personal assistant.

    • ramses0 1 hour ago
      ledger.txt (plaintextaccounting.org), g-cal integration, and Home Assistant are all so close to each other.
  • codethief 7 hours ago
    This looks really cool and like something I've been subconsciously looking for!

    A couple thoughts & ideas:

    - Given the sensitivity of the data, I would be rather scared to self-host this, unless it's a machine at home, behind a Wireguard/Tailscale setup. I would love to see this as an E2E-encrypted application, similarly to Ente.io.

    - Could the index and storage backend be decoupled, so that I can host my photos etc. elsewhere and, in particular, prevent data duplication? (For instance, if you already self-host Immich or Ente.io and you also set up backups, it'd be a waste to have Timelinize store a separate copy of the photos, IMO.) I know this is not entirely trivial to achieve, but for viewing & interacting with different types of data there are already tons of specialized applications out there. Timelinize can't possibly replace all of them.

    - Support for importing Polarsteps trips, and for importing Signal backups (e.g. via https://github.com/bepaald/signalbackup-tools ) would be nice!

    • mholt 7 hours ago
      Great comment, thanks for the questions.

      > unless it's a machine at home,

      This is, in fact, the intended model.

      The problem with any other model, AFAIK, is that someone else has access to your data, unless I implement an encrypted live database, like with homomorphic encryption. But even then, I'm sure the data would have to be decrypted in memory in places (transcoding videos or encoding images, for starters), and the physical owner of the machine will always have access to that.

      I just don't think any other way of doing it is really feasible to truly preserve your privacy. I am likely wrong, but if so, I also imagine it's very tedious, nuanced, error-prone, and restrictive.

      (Or maybe I'm just totally wrong!)

      > - Could index and storage backend be decoupled, so that I can host my photos etc. elsewhere and, in particular, prevent data duplication?

      I know this is contentious for some, but part of the point is to duplicate/copy your data into the timeline. It acts as a backup, and it ensures consistency, reliability, and availability.

      Apps like PhotoStructure do what you describe -- and do a good job of indexing external content. I just think that's going to be hard to compel in Timelinize.

      > Support for importing Polarsteps trips, and for importing Signal backups (e.g. via https://github.com/bepaald/signalbackup-tools ) would be nice!

      Agreed! I played with Signal exports for a while, but the format changed enough that it was difficult to rely on as a data source. Especially since it's not obvious what changed: it's encrypted, so it's kind of a black box.

      That said, anyone is welcome to contribute more data sources. I will even have an import API at some point, so the data sources don't have to be compiled in. Other scripts or programs could push data to Timelinize.

      Just to reiterate, one of the main goals of Timelinize is to have your data. It may mean some duplication, but I'm OK with that. Storage is getting cheap enough, and even if it's expensive, it's worth it.

      • codethief 6 hours ago
        Thanks for your thoughtful response!

        > I just don't think any other way of doing it is really feasible to truly preserve your privacy. I am likely wrong, but if so, I also imagine it's very tedious, nuanced, error-prone, and restrictive.

        It's certainly not easy but I wouldn't go as far as saying it requires homomorphic encryption. Have you had a look at what the Ente.io people do? Even though everything is E2E-encrypted, they have (purely local) facial recognition, which to me sounds an order of magnitude harder (compute-intensive) than building a chronological index/timeline. But maybe I'm missing something here, which isn't unlikely, given that I'm not the person who just spent a decade building this very cool tool.

        > It acts as a backup, and it ensures consistency, reliability, and availability.

        Hmmm, according to you[0],

        > Timelinize is an archival tool, not a backup utility. Please back up your timeline(s) with a proper backup tool.

        ;)

        I get your point, though, especially when it comes to reliability & availability. Maybe the deduplication needs to happen at a different level, e.g. at the level of the file system (ZFS etc.) or at least at the level of backups (i.e. have restic/borgbackup deduplicate identical files in the backed-up data).

        Then again, I can't say I haven't dreamed once or twice of a future where apps & their persistent data simply refer to user files through their content hashes, instead of hard-coding paths & URLs. (Prime example: why don't m3u playlist files use hashes to become resistant to file renamings? Every music player already indexes all music files anyway. Sigh.)
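        A toy sketch of that content-addressed idea, keying files by a digest of their bytes so references survive renames (not how Timelinize or any particular player actually stores things):

```python
import hashlib
import os

def content_key(path):
    """A stable key for a file, derived from its bytes rather than its name."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_index(root):
    """Map content hash -> current path, so references survive renames."""
    return {
        content_key(os.path.join(dirpath, name)): os.path.join(dirpath, name)
        for dirpath, _, names in os.walk(root)
        for name in names
    }
```

        A playlist that stored `content_key` values instead of paths would resolve tracks through such an index and keep working after files are moved or renamed.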

        > Especially since it's not just obvious what changes, it's encryption so it's kind of a black box.

        Wouldn't you rather diff the data after decrypting the archive?

        > Just to reiterate, one of the main goals of Timelinize is to have your data. It may mean some duplication, but I'm OK with that. Storage is getting cheap enough, and even if it's expensive, it's worth it.

        I suspect it will lead to duplication of pretty much all user data (i.e. original storage requirements × 2), at least if you're serious about your timeline. However, I see your point, it might very well be a tradeoff that's worth it.

        [0]: https://timelinize.com/docs/importing-data

        • mholt 2 hours ago
          Correct, Timelinize is not a backup utility, but having the copy of your data in your timeline acts as a backup against losing access to your data sources, such as Google Photos, or your social media account(s), etc. As opposed to simply displaying data that is stored elsewhere.

          But yes, I think the likes of what Ente is doing is interesting, though I don't know the technical details.

          Data content hashes are pretty appealing too! But they have some drawbacks and I decided not to lean heavily on them in this application, at least for now.

  • sdotdev 7 hours ago
    Nice work. I’ve been frustrated with how closed off location history tools have become lately. This looks like a solid step toward giving people real ownership of their data again. Definitely checking this out.
    • mholt 2 hours ago
      Thank you. Yes, I feel the same!
  • bun_at_work 10 hours ago
    Hey - this is awesome. I've been working on a small local app like this to import financial data and present a dashboard, for the family to use together (wife and I). So yeah - great work here, taking control of your data.

    I'm curious about real-time data, or cron jobs, though. I love the idea of importing my data into this, but it would be nicer if I could set it up to automatically poll for new data somehow. Does Timelinize do something like that? I didn't see it on the page.

    • mholt 9 hours ago
      Cool, yeah, the finance use case seems very relevant. Someday it'd be cool to have a Finance exploration page, like we do for other kinds of data.

      Real-time/polling imports aren't yet supported, but that's not too difficult once we land on the right design for that feature.

      I tinkered with a "drop zone": you designate a folder, and when you add files to it, Timelinize immediately imports them (then deletes them from the drop zone).

      But putting imports on a timer would be trivial.
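      A minimal sketch of that drop-zone behavior as a single polling pass (the paths and the copy-as-import step are stand-ins; a real version would run a proper import, and might use filesystem notifications instead of polling):

```python
import os
import shutil

def scan_drop_zone(drop_zone, archive, imported):
    """One polling pass: 'import' each file in the drop zone, then remove it."""
    for name in sorted(os.listdir(drop_zone)):
        src = os.path.join(drop_zone, name)
        if not os.path.isfile(src):
            continue
        shutil.copy2(src, os.path.join(archive, name))  # stand-in for a real import
        imported.append(name)
        os.remove(src)  # the drop zone is emptied once the import succeeds
```

      Running that pass on a timer is the trivial scheduled variant; only deleting the source after the copy succeeds keeps a crash from losing the file.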

      • ramses0 1 hour ago
        I have something set up at home: ~/Inbox and ~/Outbox. Anything dropped into ~/Outbox gets rsync'd to rsync.net and mv'd (locally) to ~/Inbox.

        Anything in ~/Inbox is "safe to delete" because it's guaranteed to have an off-site backup.

        Presumably a fancy management app would queue (or symlink) the full directory structure into ~/Inbox (which would then behave as a "streaming cache")

        ~/Inbox would effectively be available (read only) on "all machines" and "for free" with near zero disk space until you start accessing or pulling down files.

        I use Dropbox to manage ~/Sync (aka: active, not "dead" files).

        "Outbox", "Inbox", and "Sync" have been the "collaboration names" that resonated the most with me (along with ~/Public if you're old enough, and ~/Documents for stuff that's currently purely local)

  • ObscureScience 4 hours ago
    This reminds me of Perkeep, but I understand this is more focused on the presentation of the data, while Perkeep was focused on the storage. But maybe they could be integrated, or at least support each other's formats.
  • TheTaytay 10 hours ago
    I really like the local storage of this. Files and folders are the best!

    (When noodling on this, I’ve also been wondering about putting metadata for files in sidecar files next to the files they describe, rather than a centralized SQLite database. Did you experiment with anything like that by any chance?)

    • mholt 9 hours ago
      Why sidecar metadata files? In general I've tried to minimize the number of files on disk, since lots of small files make copying slow and error-prone. (A future version of Timelinize will likely support storing ALL the data in a DB to make for faster, easier copying.) We'd still need a DB for the index anyway, which would essentially become a copy of the metadata.
      • dav43 1 hour ago
        Don't know if you have seen the work DuckDB is doing on DuckLake. Maybe there is an overlap in vision for versioning data across multiple data sources; and similar to SQLite, it's not proprietary and is easy to drill down into. Sorry, I don't have the technical knowledge :/
  • rixed 7 hours ago
    Like others I really like the idea and the realisation looks great too!

    I might not be the typical user for this, because I'd prefer my data to actually stay in the cloud where it is, but I'd still like to have it indexed and timelined. Can Timelinize do this? Like, instead of downloading everything from Google Photos, YouTube, Bluesky, whatever, just index what's there and offer the same interface? And only optionally download the actual data in addition to the metadata?

    • mholt 7 hours ago
      That's not really aligned with my vision/goals, which is to bring my data home; but to be clear, downloading your data doesn't mean it has to leave the cloud. You can have your cake and eat it too.

      The debate between importing the data and indexing external data is a long, grueling one, but ultimately I can't be satisfied unless my data is guaranteed to be locally available.

      I suppose in the future it's possible we could add the ability to index external data, but this likely wouldn't work well in practice since most data sources lock down real-time access via their API restrictions.

  • mhamann 11 hours ago
    Cool idea. Thanks for sharing. I was really annoyed by the way Google nerfed the maps timeline stuff last year. Obviously this project is way more ambitious than that, but just goes to show you how little Google cares about the longevity of your data.
  • hexagonwin 3 hours ago
    Looks awesome! By the way, it would be nice if it also supported audio files. I frequently make multi-hour recordings (meetings, presentations, etc.), and it would be very useful to see pictures and other stuff from a specific time together with the audio from that moment.
    • mholt 2 hours ago
      Oh it actually does support audio files :) Haven't tested them in a couple years but a player should appear.
  • coffeecoders 9 hours ago
    Love the grind! One suggestion would be to add a demo link with some test data so we can see it in action.

    I am also slowly "offlining" my life. Currently, it is a mix of Synology, hard drives, and so on.

    I have always thought about building a little dashboard to access everything really. Build a financial dashboard[1] and now onto photos.

    [1] https://github.com/neberej/freemycash/

    • mholt 9 hours ago
      A live demo would be great, but I'm not sure how to generate the fake data in a way that imitates real data patterns. That's originally how I wanted to demo things, but the results weren't compelling. (It was a half-hearted effort, I admit.) So I switched to obfuscating real data.

      FreeMyCash looks great! Yours is the second financial application I've heard of; maybe we need to look at adding some finance features soon.

      • mh- 3 hours ago
        Hmm, I bet you could find appropriately-licensed datasets on Kaggle or HuggingFace that you could repurpose for that.
  • wizzard0 9 hours ago
    Wow, that's great! I'm wondering whether it's possible to use not just a folder but something like an S3-compatible backend, for photos and for DB backups as well.

    (I don't think all my photo/video archives would fit on my laptop, though the thumbnails definitely would, while MinIO or something replicated between my desktop and a backup machine at Hetzner would definitely do the trick.)

    • mholt 9 hours ago
      I don't think sqlite runs very well on S3 file systems. I think it would also be insufferably slow.

      I even encountered crashes within sqlite when using ExFAT -- so file system choice is definitely important! (I've since implemented a workaround for this bug by detecting exfat and configuring sqlite to avoid WAL mode, so it's just... much slower.)
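      The workaround described boils down to a journal-mode pragma. A minimal sketch, assuming the exFAT detection happens elsewhere and is passed in as a flag (the function name is hypothetical):

```python
import sqlite3

def open_timeline_db(path, on_exfat=False):
    """Open the index DB, avoiding WAL mode on filesystems where it misbehaves."""
    conn = sqlite3.connect(path)
    if on_exfat:
        # WAL relies on shared-memory files that some filesystems handle badly;
        # fall back to the slower rollback journal instead
        conn.execute("PRAGMA journal_mode=DELETE")
    else:
        conn.execute("PRAGMA journal_mode=WAL")
    return conn
```

      `PRAGMA journal_mode` returns the mode actually in effect, so querying it afterward doubles as a check that the fallback took.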

      • msylvest 2 hours ago
        Would https://litestream.io/ perhaps help?
      • wizzard0 8 hours ago
        Definitely not sqlite-on-s3! Just for the photos and videos, and the periodic db backups
        • mholt 8 hours ago
          I see... that might make it hard to keep all the data together, which is one of the goals. But I will give it some thought.
  • ChrisbyMe 11 hours ago
    Very cool! I have a sketchy pipeline for exporting my data from Gmaps to my personal site and always thought about building something like this.

    This could be really interesting as a digital forensics thing.

  • kevinsync 7 hours ago
    As for branding, IMO you could go a bunch of directions:

    Timelines

    Tempor (temporal)

    Chronos

    Chronografik

    Continuum

    Momentum (moments, memory, momentum through time)

    IdioSync (kinda hate this one tbh)

    Who knows! Those are just the ones that fell out of my mouth while typing. It's just gotta have a memorable and easy-to-pronounce cadence. Even "Memorable" is a possibility LOL

    -suggestions from some dude, not ChatGPT

    • kevinsync 7 hours ago
      Dateline (with the Dateline NBC theme song playing quietly in the background while you browse your history and achievements)
    • petepete 5 hours ago
      Scribe? As in the person who writes the timeline.
    • infogulch 7 hours ago
      Momenta
  • TheTaytay 10 hours ago
    Sounds really cool. I’ve been wanting something like this. Kudos for building it!

    I don’t see a link to the repo on first glance at the linked site, so linking it here: https://github.com/timelinize/timelinize

  • asciii 4 hours ago
    Oh man, this is awesome. I wanted an answer to Apple's Journal because I personally am using Obsidian (doing my best to tag and ref data). But this can be what I need to access that metadata layer and combine everything.
  • BinaryIgor 11 hours ago
    Interesting; how easy is it to back it up somewhere (to the cloud, for example) and then restore/sync it on another machine? Is the data format portable and easy to move like this?
    • mholt 11 hours ago
      Yep -- a timeline is just a folder with regular files and folders in it. They're portable across OSes. I've tried to account for differences in case-sensitive and -insensitive file systems as well. So you can copy/move them and back them up like you would any other directory.
  • junon 11 hours ago
    I had this same idea for a long time. Even took github.com/center for it (I've since changed how it's being used). Cool to see someone actually achieve it, well done.
  • NKosmatos 7 hours ago
    Nice one, thanks for sharing. For sure I’ll give it a try.

    Have you thought of creating an installer that packages all the libraries and dependencies needed? You have a very nice installation guide, but there are many users who just want the setup.exe :-)

    • mholt 7 hours ago
      Thank you! Not sure I can package everything because of license requirements. IANAL. I think the container image basically automates the various dependencies, but I didn't create it and I don't use it so I'm not 100% sure.

      Basically, I would love to know how to do this correctly for each platform, but I don't know how. Help would be welcomed.

  • LelouBil 5 hours ago
    This looks so cool! I will try it ASAP!
  • mtyurt 9 hours ago
    Very nice project! Curiosity question: since you're taking data dumps once or twice a year, and let's say you also copy the photos as well, do you update incrementally or just replace the old dump with the new one?
    • mholt 9 hours ago
      Timelinize doesn't import duplicates by default, so you can just import the whole new Takeout and it will only keep what is new.

      But you have control over this:

      - You can customize what makes an item a duplicate or unique

      - You can choose whether to update existing items, and how to update them (what to keep of the incoming item versus the existing one)
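      A minimal Python sketch of the skip-duplicates idea (hypothetical names and fields, not Timelinize's real schema): an item's identity is a hash over whichever fields are treated as distinguishing, and items whose key has already been seen are skipped.

```python
import hashlib

# Hypothetical sketch of duplicate-skipping on import: the identity key
# is a hash of user-chosen fields (here, timestamp and content).
def import_items(known_keys, incoming, key_fields=("timestamp", "content")):
    imported = []
    for item in incoming:
        key = hashlib.sha256(
            "|".join(str(item[f]) for f in key_fields).encode()
        ).hexdigest()
        if key in known_keys:
            continue  # duplicate: already imported in an earlier run
        known_keys.add(key)
        imported.append(item)
    return imported
```

      Re-importing a whole new Takeout against the same `known_keys` set then keeps only what is new; an update-instead-of-skip policy would replace the `continue` with a merge step.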

  • LatticeAnimal 10 hours ago
    Beautiful app. Surprised to see JQuery for your frontend; brings back good old memories.
    • mholt 10 hours ago
      Ha, thanks! It’s actually AJQuery, just a two-line shim to gain the $ sugar. Otherwise vanilla JS.
  • john_minsk 6 hours ago
    Amazing project. In the era of AI I can see the software like this being used daily.
  • willwade 7 hours ago
    Yeah I totally want this. How much data are we talking about on average?
  • throwrb 7 hours ago
    For a name, how about 'Rain Barrel'? Your own personal cloud
  • phito 10 hours ago
    That's great! I've been keeping a timeline of my life in Excel; I wonder if this could replace it.
  • coolelectronics 10 hours ago
    I've been basically doing this for years via a private mastodon instance. Very nice to see!
  • renewiltord 11 hours ago
    I've always wanted this but not enough to build it. I wonder if I can integrate this with my Monica instance. Thank you! I'm going to try it.
    • mholt 11 hours ago
      Will be curious how you use it. I plan to integrate local LLM at some point but it’s still nebulous in my head.
  • cwmoore 6 hours ago
    Congratulations, good work, and good luck, "TickTock" might be an appropriate name.