[–] sudneo 1 points 1 year ago (4 children)

The manifesto is actually a future vision. And again, you are interpreting it in your own way.

At the same time, you are completely ignoring:

  • what the product already does
  • the features they actually invested in building
  • their documentation, in which they stress privacy as a core value
  • their privacy policy, in which they legally bind themselves to that commitment.

Because obviously, who cares about facts, right? You have your own interpretation of a sentence that starts with "in the future we will have", and that counts more than anything.

Also, can you please share with me the quote where I say that I need to blindly trust the privacy policy? Thanks.

Because I remember saying in various comments that the privacy policy is a legally binding document, and that I can file a report with a data protection authority if I suspect they are violating it, so that they will be audited. Also, guess what! The manifesto is not a legally binding document that they have to answer for; the privacy policy is. Nobody can hold them accountable if all the stuff the manifesto says "in the future there will be" never materializes, but they are accountable already today for what they put in the privacy policy.

Do you see the difference?

[–] sudneo 1 points 1 year ago (6 children)

The “lens” feature isn’t mentioned in either Kagi manifesto.

So? It exists, unlike the vision in the manifesto. Since the manifesto can be interpreted in many ways (despite what you might claim), I think this feature is helpful for showing Kagi's intentions, since they invested work into it, no? They could have built data collection and automated ranking based on your clicks; they didn't.

People just submitted it. I don’t know why. They “trust me”. Dumb fucks.

Not sure what the argument is. The fact that people voluntarily give data (for completely different reasons that do not benefit those users directly, but under the implicit blackmail of being allowed to use the service)? I have no objections anyway against Facebook collecting the data that users submit voluntarily and that is disclosed in the policy. The problem is in the data inferred and in the behavioral data collected, which are much sneakier, and in the data collected about non-users (shadow profiles through the pixel, etc.). Putting Facebook and an imaginary future Kagi in the same pot is, in my opinion, completely out of place.

[–] sudneo 1 points 1 year ago* (last edited 1 year ago) (8 children)

It’s pretty clear that you only draw your conclusions from a predetermined trust in Kagi, a brand loyalty.

As I said before, I also draw this conclusion based on the direction they have currently taken. Like the features that actually exist right now, you know. You started this whole thing about a dystopian future when talking about lenses, a feature in which the user voluntarily chooses to uprank or downrank websites. I am specifically saying that this has been their general attitude - providing tools so that users can customize things - and therefore I am looking at that vision with this additional element in mind. You, instead, rely only on your own interpretation of the manifesto.
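
To make concrete what I mean by "the user chooses" (purely my own illustration in Python; this is not Kagi's actual code, and the domains and numbers are made up): ranking gets adjusted only by a preference table the user explicitly wrote down, with nothing inferred from clicks or behavior.

```python
# Hypothetical sketch, NOT Kagi's implementation: user-authored uprank/downrank.
# The only input is a table the user typed themselves; nothing is inferred.
from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass
class Result:
    url: str
    score: float  # base relevance computed by the engine

# Explicit, user-authored preferences: domain -> multiplier (made-up values).
user_prefs = {
    "en.wikipedia.org": 1.5,  # uprank
    "pinterest.com": 0.0,     # effectively remove
}

def rerank(results: list[Result], prefs: dict[str, float]) -> list[Result]:
    """Re-sort results using only the user's explicit boosts."""
    def adjusted(r: Result) -> float:
        return r.score * prefs.get(urlparse(r.url).netloc, 1.0)
    return sorted(results, key=adjusted, reverse=True)

results = [
    Result("https://pinterest.com/pin/123", 0.9),
    Result("https://en.wikipedia.org/wiki/Search_engine", 0.7),
]
print([r.url for r in rerank(results, user_prefs)])
# Wikipedia first, Pinterest gone to the bottom: the user decided that, not the engine.
```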

Kagi Corp is good, so feeding data to it is done in a good way, but Facebook Corp is bad so feeding data to it is done in a bad way.

You are just throwing the cards in the air. If you can't see the difference between me having the ability to submit data, when I want and what I want, and Facebook collecting data, there are only two options: you don't understand how this works, or you are arguing in bad faith. Which one is it?

[–] sudneo 1 points 1 year ago (10 children)

I’ve been quoting the Kagi Corp manifesto.

Yes, but you have drawn conclusions that are not in the quotes.

Let me quote:

But there will also be search companions with different abilities offered at different price points. Depending on your budget and tolerance, you will be able to buy beginner, intermediate, or expert AIs. They’ll come with character traits like tact and wit or certain pedigrees, interests, and even adjustable bias. You could customize an AI to be conservative or liberal, sweet or sassy!

In the future, instead of everyone sharing the same search engine, you’ll have your completely individual, personalized Mike or Julia or Jarvis - the AI. Instead of being scared to share information with it, you will volunteer your data, knowing its incentives align with yours. The more you tell your assistant, the better it can help you, so when you ask it to recommend a good restaurant nearby, it’ll provide options based on what you like to eat and how far you want to drive. Ask it for a good coffee maker, and it’ll recommend choices within your budget from your favorite brands with only your best interests in mind. The search will be personal and contextual and excitingly so!

There is nothing here that says "we will collect information and build the thing for you". The message seems pretty clearly what I am claiming instead: "you tell the AI what you want". Even if we take this as something that is going to happen (which it isn't necessarily), it clearly talks about tools to which we can input data, not tools that collect data. The difference is substantial, because data collection (a la Facebook) is a passive activity built into the functionality of the tool (which I can't use without it). Providing data to get functionality you want is a voluntary act that you as a user can perform when you want, and only for the categories of data that you want, and it does not preclude your use of the service (in fact, if you pay for a service and don't even use the features, it's a net positive for the company, if that's how they make money!).

even accusing eyewitnesses of the CEO’s bad behavior of being liars.

What I witnessed is the ranting of a person in bad faith. You are giving it credence simply because it fits your preconceptions. I criticized it based on elements within their own arguments and concluded that, for me, it's not believable. If that's your only proof of "bad behavior" and that's enough for you, good for you.

What you say is bad for Facebook, is what Kagi Corp wants to do.

Let me reiterate the quote from above:

you will volunteer your data, knowing its incentives align with yours

Now, let's be clear, because I have absolutely no intention of spending my evening repeating the same argument. Do you see the difference between the following:

  • I use a service to connect with people, share thoughts, and read others' thoughts, and the service passively collects data about me so that it can serve me content that helps the company behind it maximize its profits, and
  • I use a service that I can customize and provide data to in order to control what I see and what is displayed to me, and which has no financial incentive to do anything else with that data because I, the user, am the paying customer.

?

If you don't see the difference between the two scenarios above, there is no point in continuing this conversation; we fundamentally disagree. If you do see the difference, then you have to appreciate that the voluntary nature of the data submission moves the agency from the company to the user, and that a different system of incentives creates an environment in which the company doesn't have to screw you over in order to earn money.

[–] sudneo 1 points 1 year ago (12 children)

… Because based on their manifesto, that’s exactly what Kagi wants to do with you as a search engine; show you the things it thinks you want to see.

No, based on your interpretation of the manifesto. I already mentioned that the direction Kagi has taken so far is to give users the option to customize the tools they use. So it's not Kagi that shows you the things it thinks you want to see; it's you who tell Kagi the things you want to see. I imagine a future where you, not the company, can tune the AI to be your personal assistant.

Every giant corporation has a privacy policy

It is not having a policy that matters, obviously; it's what's inside it that does. Facebook's privacy policy is, in fact, exactly what you would expect.

[–] sudneo 1 points 1 year ago (14 children)

It’s still data given to them, no scare quotes needed.

It is, if you decide to give it to them. If it's a voluntary feature and not pure data collection, that's the difference: if you don't want to take the risk, you don't provide that data. I am sure you understand the difference between this and data collection as a necessary condition for providing the service.

And if that data includes your political alignment, like they say in their manifesto, a data breach would be catastrophic.

Which means you can simply decide not to use that feature and not give them that data?

And even if there isn’t one, using their manifesto to promise a dystopia where you are nestled in a political echo chamber sounds like a nightmare

It depends, really. When you choose which articles and newspapers you consider reputable, do you consider that an echo chamber? I don't. This is different from using profiling and data collection to serve you, without your knowledge or input, content that matches your preferences. Curating the content I want to find online is different from Meta pushing only posts that statistically resonate with me based on behavioral analysis done on top of the collected data, all behind the scenes. I don't see where the dystopia is if I can curate my own content through tools. That is very different from megacorps curating our content for their own profit.

[–] sudneo 3 points 1 year ago

They don't, but a company built on that premise (private search) that did otherwise would be playing with fire. It caters to users who specifically look for that. I, for example, would quit in an instant if that were the case.

Seriously though even if they don’t track you an adversary could compromise them

This is true of pretty much anything. Unless you host and write the code yourself, this is a risk. It is a risk with searXNG (a malicious instance, a malicious PR or code change that gets approved, etc.), with email providers, with DNS providers, and so on.

What solution do you propose for this that can actually scale?

[–] sudneo 3 points 1 year ago (16 children)

In reality, I did not read anywhere that they intend to create a profile on you. What I read is a fuzzy idea about a future in which AIs could be customised at the individual level. So far, Kagi's attitude has been to offer features for making such customisations rather than doing them on behalf of users, so I don't see why you are reading that and jumping to the conclusion that they want to build a profile on you, rather than give you the tools to create that profile. It's still "data" given to them, but it's a voluntary action, which is very different from data collection in the negative sense we all mean.

[–] sudneo 14 points 1 year ago (2 children)

He has not been sentenced yet; I hope you know that. I hope you also know the effort that he and his team made to have the proceedings conducted where he was de facto a prisoner, and the complete lack of flexibility from those who wanted him to simply step out of the embassy to be arrested and extradited.

The timeline and the events are very well narrated in Stefania Maurizi's book. It's almost gross how much the rape accusations were used to try to get to him, and how poorly both the British and Swedish authorities behaved, probably obeying the US (colonial power much?).

[–] sudneo 1 points 1 year ago (1 children)

OK, "guarantee" was too strong a word; I meant something more like "assurance" or "grounds to believe".

Either way, my point stands: you did not audit the code you are running, even if it is open source (let's be honest). I am a self-hoster myself and I don't do it either.

You are simply trusting the software author and contributors not to screw you over, and in general you are right. That's because people are usually only assholes for a gain, and because there is a chance that someone else might find the bad code in the project (far from a guarantee). That's why I cited both the privacy policy and the business model as reasons for Kagi not to screw me over: not only would it be illegal, it would also be completely devastating for their business if they were caught.

But yeah, generally speaking, hosting yourself, looking at the code, and building controls around the code (like namespaces, network policies, and DNS filtering) is a stronger guarantee that no funny business is going on than legal compliance, and I agree. That said, despite being a self-hoster myself, I do have a problem with the open-source ecosystem and its inherent dependency on free labour, so I understand the idea of proprietary code. Ultimately this is what allowed Kagi to build features that make it much more powerful than, for example, searXNG.
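
To make the "controls around the code" idea concrete, here is a minimal sketch of the namespace approach (my own illustration, assuming Linux with util-linux's `unshare` available; the binary path is a made-up placeholder): the self-hosted service gets no network egress at all, no matter what its code tries to do.

```python
# Minimal sketch of an external control: run a self-hosted service inside a
# fresh network namespace so it has no route out, regardless of what its
# code tries to do. `unshare --net` is from util-linux and requires root
# (or CAP_SYS_ADMIN); the binary path below is a hypothetical placeholder.
import subprocess

def run_without_network(cmd: list[str]) -> None:
    """Launch `cmd` in a new, empty network namespace (no egress route)."""
    subprocess.run(["unshare", "--net", "--", *cmd], check=True)

if __name__ == "__main__":
    # Hypothetical self-hosted app: it can compute, but it cannot phone home.
    run_without_network(["/usr/local/bin/selfhosted-app", "--port", "8080"])
```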

[–] sudneo 1 points 1 year ago (3 children)

Sure, but if you are considering a malicious party in the Kagi case, your steps don't help. What you propose can totally work if you are considering good-faith parties.

In other words: assume you use searXNG. If you now want to consider a malicious party running an instance, what guarantees do you have? The source code is useless, as the instance owner could have modified it. I don't see a privacy policy on https://searxng.site/searxng/, for example, and I don't see any infrastructure audit confirming they are running an unmodified version of the code, which - let's assume - has been verified to respect your privacy.

How do you trust them?

I am curious, what do you use as your search engine?

[–] sudneo 1 points 1 year ago (5 children)

I am not understanding something then.

The basics in this case are a legally binding document saying they don't do x and y. If they did x or y, they would be doing something illegal and would be intentionally deceptive (because they say they don't do it).

So, the way I see it, the risk you are trying to mitigate is a company that actively tries to deceive you. I completely agree that this can happen, but I think it is quite rare and, unfortunately, a problem with everything, one with no general solution (or, to be more specific, one that what you consider the basics - open-source code and an audit - does not mitigate).

Other than that, I consider a legally binding privacy policy a much stronger "basic" than open-source code, which is much harder to review and whose changes are harder to keep track of.

Again, I get your point, and whatever your threshold of trust is, that's up to you, but I disagree with the weight you assign to what you consider "the basics" when it comes to privacy guarantees for building trust. And I believe that in your risk mapping, your mitigations do not properly match the threat actors.
