Trying to rebuild my website (php mvc built a decade ago) using Django. I want to be able to update any page content, upload and display images, have multiple blog instances. I do a lot of django-cms by day, but it's too much for a small personal website, so I started to create a (tiny, foss) CMS based on Django, django-prose-editor for the content, and some new apps (for now, Page & Blog).
The site isn't even online, but for now I'm starting to think about the next steps (seo-related things to implement, generalize app functions to handle not only blog but other (hypothetical) apps as well, improve code quality and repo readability, separate apps from the website so anyone can add them to their django website if they want to). It's a lot of work for something no one will ever use, but I must at least try to make it clean and discoverable :)
I, too, am self-hosting some projects on an old computer. And the fact that you can "hear the internet" (with the fans spinning up) is really cool (unless you're trying to sleep while being scraped).
I am referring to the convenience of being able to download it from the store and start using it immediately. If it were as effortless as I described, they would reach a much larger number of users.
> If it were as effortless as I described, they would reach a much larger number of users
Almost certainly not. If you need this kind of tool, you'll either self-host it, use the hosted version, or use Figma. There are no comparable offline-only alternatives. Which users would it reach, exactly?
Yeah, the AI solves a problem created by the company that made the AI, because their algorithms are biased toward displaying websites containing content written for them instead of content written for humans :/
> Personal experience however shows me that when I look at a recipe site I will first have to skip through the entire backstory to the recipe and then try to parse it inbetween annoying ads in a bloated wordpress page
That's where money comes into play. People were putting time and effort into offering something for free, then some companies told them they could actually earn money from their content. So they put up ads, because who doesn't like some money for already-done work?
Then the same companies told them they would make less money, and that if they wanted to keep earning as much as before, they would need to put up more ads and get more visits (so invest heavily in SEO).
By then, those people had either organized themselves or stopped updating their websites, and had created companies to handle the money generated by their websites. To keep those companies sustainable, they needed to add more ads to the websites.
Then some people thought that maybe they could buy the companies running the recipe websites and add a bunch more ads to earn even more money.
I think you're thinking about those websites owned by big companies whose only goal is to make money, but the author is writing about real websites made by real people, who don't show ads on the websites they made because they care about their visitors, not about making money.
Semi related, but a decent search engine like Kagi has been a dramatically better experience than "searching" with an LLM. The web is full of corporate interests now, but you can filter that out and still get a pretty good experience.
The thing is you can’t regulate word of mouth. It just pushes the money underground, where it can’t be taxed. People will still be paid to promote things, they’ll just pass it off as their own opinion, and it’ll be more insidious. Like it or not, at least advertising now often is clearly advertising. Not always, but often.
Or just let this LLM mania run to its conclusion, and we'll end up with two webs, one for profit for AI by AI and one where people put their shit for themselves (and don't really care what others think about it, or if they remix it, or ...).
There are already a lot of initiatives following this logic (the small web movement, the IndieWeb, the gemini/gopher protocols...), but the problem is that people use the web, not those projects. Even the fediverse is growing slowly, and it runs on the web.
Sounds like that could be a fun idea for a new search engine (or search-engine feature): only show results from websites without ads and/or paywalls. Sounds like a really fun way to experience the passionate part of the internet. It could be hard to implement, though: I'd guess that at any level of popularity, people would quickly try to turn such sites into sales funnels.
> Note: On some browsers, like Chrome, using Speech Recognition on a web page involves a server-based recognition engine. Your audio is sent to a web service for recognition processing, so it won't work offline.
We generate PDF files using WeasyPrint (it converts HTML+CSS into cool PDF files). I think tools like this are very valuable and practical for building higher-level PDF-generator tools.
Yep, in-house PDF generators should be some sort of good middle ground, but I dunno if this 'weasyprint' is open source, and is it _lean_ open source (no C++, Java, etc.)?
When dealing with an ultra-complex file format which cannot be dodged, usually a good way to deal with it is to only use a very simple but coherent subset and enforce this usage with validation tools.
For instance, on the web: noscript/basic (x)html (or you are jailed in the 2.5 web engines of the whatwg cartel).
With PDF, I dunno much about the format (since I did not manage to easily download the specs), but when I have to print some text, I have a very small PDF generator for that (written ~25 years ago, so no utf-8 for me).
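To illustrate how small a "tiny subset" generator can be, here is a hypothetical sketch (not the 25-year-old tool mentioned above) that emits a single-page, Helvetica-only, Latin-1-only PDF 1.4 file using nothing but the Python standard library:

```python
def make_pdf(text: str) -> bytes:
    """Emit a minimal one-page PDF 1.4 file showing `text` in Helvetica."""
    # Restrict to Latin-1 and escape the characters special in PDF strings.
    safe = text.encode("latin-1", "replace").decode("latin-1")
    for ch in "\\()":
        safe = safe.replace(ch, "\\" + ch)
    # Content stream: begin text, select font, move cursor, show string.
    stream = f"BT /F1 24 Tf 72 770 Td ({safe}) Tj ET".encode("latin-1")
    objects = [
        b"<< /Type /Catalog /Pages 2 0 R >>",
        b"<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
        b"<< /Type /Page /Parent 2 0 R /MediaBox [0 0 595 842] "
        b"/Resources << /Font << /F1 5 0 R >> >> /Contents 4 0 R >>",
        b"<< /Length %d >>\nstream\n" % len(stream) + stream + b"\nendstream",
        b"<< /Type /Font /Subtype /Type1 /BaseFont /Helvetica >>",
    ]
    out = bytearray(b"%PDF-1.4\n")
    offsets = []
    for i, body in enumerate(objects, start=1):
        offsets.append(len(out))
        out += b"%d 0 obj\n" % i + body + b"\nendobj\n"
    # Cross-reference table: byte offset of every object, then the trailer.
    xref_pos = len(out)
    out += b"xref\n0 %d\n" % (len(objects) + 1)
    out += b"0000000000 65535 f \n"
    for off in offsets:
        out += b"%010d 00000 n \n" % off
    out += (b"trailer\n<< /Size %d /Root 1 0 R >>\nstartxref\n%d\n%%%%EOF\n"
            % (len(objects) + 1, xref_pos))
    return bytes(out)
```

The whole trick is exactly the subset idea: one page tree, one built-in font, one content stream, and a hand-written xref table — everything else in the spec is simply never emitted.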
But what's important: such an attempt must be paired with re-assessing the pertinence of how the information systems are used, and yes, it will be annoying and much less comfy, and that MUST be acknowledged before even trying.
And big tech is not the only one trying hard to do vendor and developer lock-in.
> usually a good way to deal with it is to only use a very simple but coherent subset and enforce this usage with validation tools
You’re right, that’s exactly what we do. We support a growing subset of HTML and CSS that’s documented. We also use the W3C testing suite for HTML/CSS, and PDF validators, on top of custom unit tests.
> And big tech is not the only one trying hard to do vendor and developer lock-in.
We "only" follow open specifications and refuse vendor-specific features to avoid lock-ins (equivalent closed-source tools love that). And we even love the other open-source "concurrents": ♥ to Paged.js and Vivliostyle, try them, they’re great too!
"Open" is not enough anymore: it also has to be lean, stable in time, and able to do a good enough _pertinent_ (can be very subjective) job (and in the case of software, that includes the SDK, for instance if some c++ or similar are around, it should be excluded de-facto for obvious reasons).
It is _EXTREMELY_ hard to justify an honest and permanent income writing software... REALLY HARD.
You can learn more about WeasyPrint on their website (https://weasyprint.org/ ). It's an open-source Python package that can be launched from the CLI or from Python code.
It uses pydyf, which is "a low-level PDF generator written in Python and based on PDF specification 1.7" (from their README at https://github.com/CourtBouillon/pydyf ).
Compile a minimal python interpreter with tinycc &| cproc &| scc, run this pydyf and you should be good to go :)
Hopefully, its API gets a C bridge for interop.
But pydyf claims to go up to PDF 1.7: that is kind of arrogant, given the file format's complexity.
That's why such tools are not enough: what's important is to evaluate and settle on a subset of the PDF format, in order to significantly reduce the technical cost of ownership and the exit cost, and maybe use such tools to also write validation tools that enforce the usage of that PDF subset.
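As a sketch of what such a validation tool could look like: a naive checker that rejects PDFs above a chosen version baseline or containing banned feature tokens. The 1.4 baseline and the token list are illustrative choices, not an official profile:

```python
# Hypothetical "subset enforcement" sketch: cap the claimed PDF version
# and ban tokens for features we chose to exclude from our subset.
FORBIDDEN_TOKENS = [b"/JavaScript", b"/Launch", b"/EmbeddedFiles", b"/RichMedia"]
MAX_VERSION = (1, 4)

def check_pdf_subset(data: bytes) -> list[str]:
    """Return a list of subset violations; an empty list means it passes."""
    problems = []
    if not data.startswith(b"%PDF-"):
        return ["not a PDF: missing %PDF- header"]
    try:
        major, minor = data[5:8].split(b".")
        version = (int(major), int(minor))
    except ValueError:
        return ["unparsable version in header"]
    if version > MAX_VERSION:
        problems.append("PDF %d.%d is above the allowed 1.4 baseline" % version)
    for token in FORBIDDEN_TOKENS:
        if token in data:
            problems.append("forbidden feature: " + token.decode())
    return problems
```

Note the tradeoff of a sketch like this: a raw byte scan can false-positive on tokens that appear inside streams or string literals; a real enforcement tool would parse the object graph before judging.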
Very often, complex file formats (open or not) end up being generated and consumed by one program.
A warning: big tech and its minions will fight super hard against everything that is simple, stable in time, and does a good enough job (like noscript/basic (x)html for nearly all online services, as they were working a few years back).