I had the same problem on our home server... I just stopped the git forge due to lack of time.
For what it's worth, most requests kept coming in for ~4 days after -everything- returned plain 404 errors. Millions of them. And there are still some now, weeks later...
I get a new fingerprint id every time I refresh the page (Firefox, Linux) -- so that might be sampling a tiny bit too much.
The audio and canvas fingerprints are constant though, so it's probably more than enough...
I'm also convinced they're observing a correlation with this rather than something age-specific.
It's like exercise: it's really impressive how people who stay active in their later years can still be very fit well into their 80s or 90s, yet if one stops trying, it all just crumbles away.
Unfortunately I get the same kind of garbage around closing curly braces / closing parentheses / dots with this magick filter... It seems to do slightly better with an extra `-resize 400%`, but it's still very far from as good as what you're getting (to be fair, the monochrome filter's output isn't pretty (it bleeds) when I inspect the result).
I wonder what's different?
(ImageMagick-7.1.1.47-1.fc42.x86_64 and tesseract-5.5.0-5.fc42.x86_64 here, no config, langpack(s) also from the distro)
Yeah, he definitely should have used a code block for the examples. To the author, if you are trying to preserve code formatting and syntax highlighting, there are JS packages that will take care of all of that and produce clean, copyable, well-rendered, accessible code formatting for you.
My only complaint about niri is that after a few weeks without a reboot I end up with ~500 terminals open, as I often open a new shell to check something, get distracted, and forget about it once it scrolls out of view...
(I usually notice at the 400-500 mark because this machine starts swapping noticeably, and closing it all is a chore that usually ends in pkill without checking...)
Wouldn't you have the same problem with changing workspaces? Sounds like you can't keep track of anything not currently present on the screen, which before the overview was a lot harder to deal with. One thing that could help is to create a "temporary terminal" keybinding to launch in floating mode so you'll never forget to close it. Or create a focus-or-launch bind that switches to an existing terminal (tools like Nirius can help minimize scripting).
The other thing that may help is adjusting your struts so you can see that windows exist to the left or the right. More of a general workflow tip than one related just to terminals.
Yes and no; the difference with workspaces is that I was limited to 0-9 with my old wm, so at some point I'd just run out of space and had to close some windows.
(well, that, and X11 is apparently limited to 256 clients by default and I never changed that; but I rarely hit that limit :P)
I do have some struts on the side, but I'm basically always juggling at least 4 or 5 tasks, so I always have things open. (I'm not using one right now, but I do like the "quake terminal" style of temporary terminal... For the same reason it's not always appropriate, though: if I didn't close the terminal, it's because I wasn't done with it and meant to get back to it...)
I started using niri before the overview, I think that could help if I get used to it.
But better than the overview, what I'd want is something always visible, like a horizontal scrollbar-style indicator, to remind me that there are, say, more than 3 windows hidden off-screen.
That might be possible to do with waybar and a bit of glue parsing the windows list...
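Something like the following might be all the glue needed, as a waybar custom module; this is only a sketch and assumes `niri msg --json windows` exists in the niri build at hand (the field names are from memory, worth double-checking):

  #!/usr/bin/env python3
  # Sketch: count niri windows and emit waybar custom-module JSON.
  # Assumes `niri msg --json windows` returns a JSON array of windows
  # with at least a "title" field; adjust THRESHOLD to taste.
  import json, subprocess

  THRESHOLD = 3  # start nagging above this many windows

  out = subprocess.run(["niri", "msg", "--json", "windows"],
                       capture_output=True, text=True, check=True).stdout
  windows = json.loads(out)
  n = len(windows)

  print(json.dumps({
      "text": f"{n} win",
      "class": "warning" if n > THRESHOLD else "",
      "tooltip": "\n".join(w.get("title") or "?" for w in windows),
  }))

Wired into waybar as a custom module with "return-type": "json" and a short "interval", it would at least nag once the count passes the threshold.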
I don't use niri, but I worked around this problem (feature?) with a bash script that by default checks whether a terminal is already open and, if so, brings it into focus. I attach it to my default terminal shortcut, and then create one more shortcut that opens a new terminal every time. So, depending on which shortcut is pressed, I can either keep reusing the existing terminal or open a new one. I'm sure the script could do fancier logic, like allowing new terminals up to a given number and after that just bringing the latest one into focus. Plenty of possibilities.
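A minimal sketch of that default focus-or-reuse branch, assuming an X11 session with wmctrl installed (the terminal name and WM_CLASS below are placeholders; on niri the same logic would go through `niri msg` instead):

  #!/usr/bin/env python3
  # Sketch of the focus-or-launch logic described above, for X11 + wmctrl.
  # TERMINAL and WM_CLASS are placeholder assumptions; pass --new to force
  # a fresh terminal instead of reusing an existing one.
  import subprocess, sys

  TERMINAL = "alacritty"   # whatever terminal you actually run
  WM_CLASS = "Alacritty"   # check `wmctrl -lx` for the real class string

  def existing_terminal():
      # `wmctrl -lx` lists: window id, desktop, WM_CLASS, host, title
      for line in subprocess.run(["wmctrl", "-lx"], capture_output=True,
                                 text=True).stdout.splitlines():
          parts = line.split(None, 4)
          if len(parts) >= 3 and WM_CLASS.lower() in parts[2].lower():
              return parts[0]  # window id, e.g. 0x04600003
      return None

  win = None if "--new" in sys.argv else existing_terminal()
  if win:
      subprocess.run(["wmctrl", "-i", "-a", win])  # -i: numeric id, -a: activate
  else:
      subprocess.Popen([TERMINAL])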
I have a script that allows searching for windows based on title; so e.g. if I know I had a shell open in directory X I could search for that and jump to it...
But in practice I quickly have 5+ shells in a directory once I start working on something and at this point my script doesn't let me differentiate between these easily enough to be useful.
Hmm, perhaps that could be made more interactive and allow cycling through these without closing the search overlay... I'll give that a try! :)
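Rough sketch of what that cycling could look like, assuming the niri IPC exposes `niri msg --json windows` (with id/title/is_focused fields) and a `focus-window --id` action; both are worth double-checking against the installed niri version:

  #!/usr/bin/env python3
  # Sketch: focus the next window whose title contains the given substring,
  # starting after the currently focused match, so repeated runs cycle.
  # Assumes `niri msg --json windows` and the `focus-window --id` action.
  import json, subprocess, sys

  query = sys.argv[1].lower() if len(sys.argv) > 1 else ""

  out = subprocess.run(["niri", "msg", "--json", "windows"],
                       capture_output=True, text=True, check=True).stdout
  matches = [w for w in json.loads(out) if query in (w.get("title") or "").lower()]
  if not matches:
      sys.exit(f"no window matching {query!r}")

  focused = next((i for i, w in enumerate(matches) if w.get("is_focused")), -1)
  target = matches[(focused + 1) % len(matches)]["id"]

  subprocess.run(["niri", "msg", "action", "focus-window", "--id", str(target)])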
If a shell has been sitting at the prompt for 7 days with no input, it's probably OK for it to close. I'm sure it'll be wrong sometimes, but it seems less bad than pkill en masse.
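For what it's worth, bash can already do the crude version of this by itself: set TMOUT and an interactive shell exits after that many seconds without input at the prompt. For something less drastic to start with, here is a rough report-only sketch that flags shells whose terminal has been idle for a week, using the same tty-atime heuristic as `w` (the shell list and threshold are just examples):

  #!/usr/bin/env python3
  # Sketch: list interactive shells whose tty has seen no input for N days
  # (idle time taken from the pts device's atime, like `w` does).
  # Report-only on purpose; review the output before wiring up any kill.
  import os, time

  IDLE_DAYS = 7
  SHELLS = {"bash", "zsh", "fish"}

  now = time.time()
  for pid in filter(str.isdigit, os.listdir("/proc")):
      try:
          comm = open(f"/proc/{pid}/comm").read().strip()
          if comm not in SHELLS:
              continue
          tty = os.readlink(f"/proc/{pid}/fd/0")  # e.g. /dev/pts/4
          if not tty.startswith("/dev/pts/"):
              continue
          idle = now - os.stat(tty).st_atime
          if idle > IDLE_DAYS * 86400:
              print(f"{comm} pid={pid} on {tty}: idle {idle/86400:.1f} days")
      except OSError:
          continue  # process went away, or not ours to inspect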
And then you reboot the machine with full session restore working (whenever that becomes available on Wayland) and get 500 terminal windows opening at the same time :)
This is what we all want: to be in control. I am OK with making a mess sometimes, as long as it is my mess and not the doing of some magical system. So yeah, I would be OK with that.
Some kind of alert task that tells you you have a window open that you haven't visited in days would probably also be useful, to your point.
I am not against it; it's just that I can see positives in this. This is like tmux without tmux.
Oh, I actually agree with you; I was more concerned about the surprise and amusement of seeing that amount of windows pop up out of nowhere. Though I do wonder how well a Linux computer would deal with so much forking; if it froze the machine for a minute that would be rather bad, but maybe Linux and Wayland are mature enough not to freeze and to spawn all of that in a second or so.
Which extracts to this .config file (it looks like Lua code that creates a secret from a PBKDF2 of... what? I couldn't find where the secrets would come from here, but that repo obviously omits the interesting bindings; from the "how it works" link it looks like they're just hashing the SN to generate a pseudorandom key, but then I don't see why you couldn't generate a key for a neighboring device by just faking its SN... a toy illustration of that follows the dump):
local maxHash=pcall(function() ba.crypto.hash("sha512") end) and "sha512" or "sha256"
local sfmt,jencode,jdecode,symmetric,PBKDF2,keyparams,sign,jwtsign,createkey,createcsr,sharkcert=
   string.format,ba.json.encode,ba.json.decode,ba.crypto.symmetric,ba.crypto.PBKDF2,ba.crypto.keyparams,
   ba.crypto.sign,require"jwt".sign,ba.create.key,ba.create.csr,ba.create.sharkcert

local function setuser(ju,db,name,pwd)
   if pwd then
      if type(pwd) == "string" then
         pwd={pwd=pwd,roles={}}
      end
      db[name]=pwd
   else
      db[name]=nil
   end
   local ok,err=ju:set(db)
   if not ok then error(err,3) end
end

local function tpm(gpkey,upkey)
   local keys={}
   local function tpmGetKey(kname)
      local key=keys[kname]
      if not key then error(sfmt("ECC key %s not found",tostring(kname)),3) end
      return key
   end
   local function tpmSign(h,kname,op) return sign(h,tpmGetKey(kname),op) end
   local function tpmJwtsign(p,kname,op) return jwtsign(p,function(h) return sign(h,tpmGetKey(kname)) end,op) end
   local function tpmKeyparams(kname) return keyparams(tpmGetKey(kname)) end
   local function tpmCreatecsr(kname,...) return createcsr(tpmGetKey(kname),...) end
   local function tpmCreatekey(kname,op)
      if keys[kname] then error(sfmt("ECC key %s exists",kname),2) end
      op = op or {}
      if op.key and op.key ~= "ecc" then error("TPM can only create ECC keys",2) end
      local newOp={}
      for k,v in pairs(op) do newOp[k]=v end
      newOp.rnd=PBKDF2(maxHash,"@#"..kname,upkey,5,1024)
      local key=createkey(newOp)
      keys[kname]=key
      return true
   end
   local function tpmHaskey(kname) return keys[kname] and true or false end
   local function tpmSharkcert(kname,certdata) return sharkcert(certdata,tpmGetKey(kname)) end
   require"acme/engine".setTPM{jwtsign=tpmJwtsign,keyparams=tpmKeyparams,createcsr=tpmCreatecsr,createkey=tpmCreatekey,haskey=tpmHaskey}
   local t={}
   function t.haskey(k) return tpmHaskey(k) end
   function t.createkey(k,...) return tpmCreatekey(k,...) end
   function t.createcsr(k,...) return tpmCreatecsr(k,...) end
   function t.sign(h,k,o) return tpmSign(h,k,o) end
   function t.jwtsign(k,...) return tpmJwtsign(k,...) end
   function t.keyparams(k,...) return tpmKeyparams(k,...) end
   function t.sharkcert(k,...) return tpmSharkcert(k,...) end
   function t.globalkey(n,l) return PBKDF2(maxHash,n,gpkey,5,l) end
   function t.uniquekey(n,l) return PBKDF2(maxHash,n,upkey,5,l) end
   function t.jsonuser(k,global)
      k=PBKDF2("sha256","@#"..k,global and gpkey or upkey,6,1)
      local function enc(db)
         local iv=ba.rndbs(12)
         local gcmEnc=symmetric("GCM",k,iv)
         local cipher,tag=gcmEnc:encrypt(jencode(db),"PKCS7")
         return iv..tag..cipher
      end
      local function dec(encdb)
         if encdb and #encdb > 30 then
            local iv=encdb:sub(1,12)
            local tag=encdb:sub(13,28)
            local gcmDec=symmetric("GCM",k,iv)
            local db
            pcall(function() db=jdecode(gcmDec:decrypt(encdb:sub(29,-1),tag,"PKCS7")) end)
            if db then return db end
         end
         return nil,"Data corrupt"
      end
      local ju,db=ba.create.jsonuser(),{}
      return {
         users=function() local x={} for u in pairs(db) do table.insert(x,u) end return x end,
         setuser=function(name,pwd) setuser(ju,db,name,pwd) return enc(db) end,
         setdb=function(encdb) local d,err,ok=dec(encdb) if d then ok,err=ju:set(d) if ok then db=d return ok end end return nil,err end,
         getauth=function() return ju end
      }
   end
   ba.tpm=t
end

local klist={}
return function(x)
   if true == x then
      local hf=ba.crypto.hash(maxHash)
      for _,k in ipairs(klist) do hf(k) end
      tpm(ba.crypto.hash(maxHash)(klist[1])(true),hf(true))
      klist=nil
      return
   end
   table.insert(klist,x)
end
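To spell out that last concern (this is my reading of the "how it works" page, not something I've verified): if the per-device key really is just a deterministic KDF of the serial number, then deriving a neighboring device's key only requires knowing its SN. Toy sketch, with a made-up salt, SN format and parameters:

  #!/usr/bin/env python3
  # Toy illustration only: salt, SN format, iteration count and key length
  # are invented. The point is just that a key derived purely from a
  # guessable serial number can be recomputed by anyone who knows the SN.
  import hashlib

  def device_key(serial: str) -> bytes:
      # stand-in for the PBKDF2-from-SN derivation discussed above
      return hashlib.pbkdf2_hmac("sha512", serial.encode(), b"example-salt", 5, dklen=64)

  mine      = device_key("SN-000123")
  neighbour = device_key("SN-000124")  # nothing secret needed to compute this
  print(mine.hex()[:16], neighbour.hex()[:16])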
They just can't make an official release with it, because they can't publish the patch sources (embargoed), and their releases, being open source, must match what they published...
I don't run AI, but anything I don't trust 200% runs without access to my home directory, and without internet access either if it doesn't really need it.
bwrap commands can be a mouthful, so I suggest making a script for the things you commonly do, e.g. "run with this directory as $HOME" or "run with an empty home, keeping just this directory as is", with a couple of flags to enable networking or Wayland/sound... Once you have this, there really is no reason not to sandbox.
It's probably not as good as running in a full VM, but it's good enough for me.
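In case it helps anyone, here is roughly what such a wrapper can look like. It's a sketch, not a complete sandbox policy: the flags are real bwrap options, but the mount list (and the lib64 symlink, which is Fedora-flavoured) will need adjusting per distro:

  #!/usr/bin/env python3
  # Sketch of a bwrap wrapper: run a command with a throwaway $HOME,
  # optionally keeping one directory, the network and the Wayland socket.
  import argparse, os, subprocess, sys

  p = argparse.ArgumentParser()
  p.add_argument("--keep", help="directory to bind read-write inside the sandbox")
  p.add_argument("--net", action="store_true", help="allow network access")
  p.add_argument("--wayland", action="store_true", help="expose the Wayland socket")
  p.add_argument("cmd", nargs=argparse.REMAINDER)
  args = p.parse_args()

  home = os.path.expanduser("~")
  run = os.environ.get("XDG_RUNTIME_DIR", f"/run/user/{os.getuid()}")

  cmd = ["bwrap",
         "--ro-bind", "/usr", "/usr", "--ro-bind", "/etc", "/etc",
         "--symlink", "usr/bin", "/bin", "--symlink", "usr/lib64", "/lib64",
         "--proc", "/proc", "--dev", "/dev",
         "--tmpfs", home,                      # empty, throwaway home
         "--unshare-all", "--die-with-parent"]
  if args.keep:
      keep = os.path.abspath(args.keep)
      cmd += ["--bind", keep, keep]
  if args.net:
      cmd += ["--share-net"]                   # re-allow net despite --unshare-all
  if args.wayland:
      wl = os.path.join(run, os.environ.get("WAYLAND_DISPLAY", "wayland-0"))
      cmd += ["--bind", wl, wl, "--setenv", "XDG_RUNTIME_DIR", run]

  target = args.cmd[1:] if args.cmd[:1] == ["--"] else args.cmd
  sys.exit(subprocess.call(cmd + (target or ["bash"])))

Usage would then be along the lines of `sandbox.py --keep ~/src/project --net -- some-tool`, where sandbox.py is whatever you name the script.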