Yes, I've also been thinking about this a lot, and reached similar conclusions.
One of the main issues I can't think my way around is privacy. I don't want my full and true trust network to be publicly available, although I might provide a more detailed map to my trusted peers. I might also "lie" about a given weight depending on who asks, for diplomatic purposes (perhaps you have a close friend who also happens to be rather gullible).
A working system is going to take a lot of effort on the part of each user to correctly and accurately annotate their own trust graph, on an ongoing basis - perhaps this is just too impractical.
I performed a slightly unethical experiment many years ago, in which I created an entirely fake Facebook account (back when people actually used Facebook), and slowly sent out friend requests to personal acquaintances. All it takes is one or two to get started, and every subsequent person sees "n mutual friends" and is more likely to accept the random request. It snowballed from there, and eventually I had "infiltrated" a non-trivial portion of my own social network with an entirely fictional persona.
(Ethics note: I only sent friend requests to people I was already friends with on my real account - so I wasn't obtaining any new private information I wouldn't otherwise have had access to - and I didn't perform any interactions beyond sending friend requests)
Any kind of trust network is going to need to deal with this sort of infiltration - and I'm not sure how.
And another thing - trust is bought and sold all the time. Social media influencers sell a small fragment of their trust level every time they do a paid endorsement. If there's some kind of explicit trust network, people will pay others to obtain a higher trust level. Is there anything we can do about that?
Usability, bootstrapping, and privacy - the biggest problems as I see it, ordered by difficulty (yup).
Privacy - make queries require some trust from you. Your software may decide what is allowed based on your needs, e.g. heavy rate limiting and global caps for low-trust peers, while letting close friends ask as much as they want; you could also take into account who you are replying to. Some other privacy issues could perhaps be solved by having a wallet of identities.
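The trust-gated query policy above could be sketched like this - a minimal illustration where the quota a peer gets scales with how much I trust them, plus a global cap. The thresholds and quota numbers are entirely made up:

```python
# Sketch: trust-gated query limits. A peer's daily query allowance scales
# with my trust in them (0..1), and a global cap bounds total exposure.
# All thresholds and quotas here are arbitrary, for illustration only.

DAILY_GLOBAL_CAP = 500

def daily_quota(trust):
    """Map my trust in a peer (0..1) to a per-day query allowance."""
    if trust >= 0.8:
        return DAILY_GLOBAL_CAP   # close friends: effectively unlimited
    if trust >= 0.3:
        return 50                 # acquaintances: moderate limiting
    return 5                      # strangers: heavily limited

def allow_query(trust, used_today, global_used_today):
    """Allow a query only within both the per-peer and global budgets."""
    return (used_today < daily_quota(trust)
            and global_used_today < DAILY_GLOBAL_CAP)

print(allow_query(0.9, used_today=100, global_used_today=0))  # True
print(allow_query(0.1, used_today=5, global_used_today=0))    # False
```

A real system would presumably make the trust-to-quota mapping user-configurable rather than hardcoded like this.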
Usability - very hard. I think it must be weighted, and I think it should be weighted in a way that's not algorithmic. It should be your explicit trust in somebody - that's a rock-solid foundation: if you tell me you trust some person 80%, I know it's you saying that and not what some algorithm computed. We have a really good idea about our social network trust in our heads and we keep updating it, but it seems hard to transfer to a device or even verbalize. Assigning weights to each trusted person is just too much to ask even from the most engaged users. Some app could perhaps ask you from time to time whether you trust person A or person B more, after asking for a short list of your most trusted peers, or something like that - but it is a hard problem which I see no clear solution to yet. Adjusting is also not obvious: if you got a high-trust result for, say, some mechanic and he turned out to be terrible, you'd want to see which person it came from and perhaps lower your trust in them.
Because they could be compromised. That's another problem. If it becomes what it could be, billions will be spent trying to affect results. People can be compromised and have no idea. I think FB and the like have faced this problem already, but it is easier to counter as a centralized entity.
Social consequences would be huge, but it does not fix the world completely. E.g. there would still be stupid hubs: many people are going to trust some celebrity, and that celebrity is going to sell trust, abuse power, etc. But that's those people's choice.
In short, it just lets you query trust, but the way people assign it seems more like an education-system problem. Part of how much I trust somebody is how good they are at assigning trust. Some people I don't trust much, not because I think they are malicious, but because I know they can be influenced easily or are not careful about assessing their trust.
In my experience, despite the Internet and all that comes with it, asking trusted people remains the best way to learn about many things. They just point me to something and I know it's worth my time.
It would be nice if it could scale and not waste time on both sides.
> We have a really good idea about our social network trust in our heads and we keep updating it, but it seems hard to transfer to a device or even verbalize.
yup yup yup. I hadn't thought about the A/B comparison approach, I like that.
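The A/B comparison idea could work something like an Elo rating: each "who do you trust more, A or B?" answer nudges two scores, and the scores eventually settle into rough weights without ever asking for explicit numbers. A minimal sketch, where the names, starting score, and K-factor are all hypothetical:

```python
# Sketch: derive rough trust weights from pairwise "who do you trust more,
# A or B?" answers, using a simple Elo-style update. Names, the starting
# score of 1000, and the K-factor of 32 are arbitrary choices.

def elo_update(scores, preferred, other, k=32):
    """Nudge two scores after the user picks `preferred` over `other`."""
    ra, rb = scores[preferred], scores[other]
    expected = 1 / (1 + 10 ** ((rb - ra) / 400))  # P(preferred wins)
    scores[preferred] = ra + k * (1 - expected)
    scores[other] = rb - k * (1 - expected)

scores = {"alice": 1000.0, "bob": 1000.0, "carol": 1000.0}

# Three hypothetical answers from the user:
for preferred, other in [("alice", "bob"), ("alice", "carol"), ("carol", "bob")]:
    elo_update(scores, preferred, other)

# Normalize to 0..1 weights that could be stored in a local trust graph.
lo, hi = min(scores.values()), max(scores.values())
weights = {p: (s - lo) / (hi - lo) for p, s in scores.items()}
print(weights)  # alice ends up highest, bob lowest
```

The appeal is that each question is cheap to answer, and the ordering stays grounded in explicit user choices rather than something an algorithm inferred behind your back.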
I also think you need to have separate weights for "how much I trust this person" and "how much I trust this person's weights". Going back to my "gullible friend" example - I might trust their first-hand stories perfectly well, but would trust them much less than my other peers when it comes to relaying second-hand information.
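The two-weight idea could be sketched as follows: keep a direct-trust weight for a peer's first-hand claims, and a separate meta-trust weight that discounts the weights *they* publish. All the names and numbers here are hypothetical:

```python
# Sketch: separate "trust in the person" from "trust in their weights".
# When computing second-hand trust in a target, each peer's published
# weight is discounted by my meta-trust in that peer's judgement.

direct = {"gullible_friend": 0.9, "skeptic": 0.7}  # first-hand claims
meta   = {"gullible_friend": 0.2, "skeptic": 0.8}  # trust in their weights

# Weights each peer publishes for a third party (a mechanic, say):
their_weights = {
    "gullible_friend": {"mechanic": 0.95},
    "skeptic":         {"mechanic": 0.6},
}

def second_hand_trust(target):
    """Meta-trust-weighted average of peers' opinions about `target`."""
    num = sum(meta[p] * w[target]
              for p, w in their_weights.items() if target in w)
    den = sum(meta[p]
              for p, w in their_weights.items() if target in w)
    return num / den if den else 0.0

print(second_hand_trust("mechanic"))  # 0.67 - the skeptic's view dominates
```

Here the gullible friend's enthusiastic 0.95 barely moves the result, because their meta-trust is low - which is exactly the behavior a single combined weight can't express.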
I disagree about separate weights. I don't trust the judgement of somebody gullible.
But it's related to the problem of what trust is. E.g. I may trust somebody to create secure software, but I wouldn't trust her to take care of my dog. So at the beginning I thought there should be dimensionality to trust. Currently I think it overly complicates things, not to mention the problem of categories. So trust means whatever it means for you and the people you trust, and specific use cases can perhaps be covered with multiple identities instead.
Since you've spent some time on the problem if you have any other insights/ideas/problems I'd be delighted to hear them.
There are a lot of interesting dynamics. It should influence politics a lot. And at the beginning I thought maybe the criminal world too, making it harder to infiltrate gangs - but then I realized that as a criminal I wouldn't dare make a list of my accomplices on a device which could end up in the hands of law enforcement.