this post was submitted on 28 May 2024
884 points (99.8% liked)

Technology


A purported leak of 2,500 pages of internal documentation from Google sheds light on how Search, the most powerful arbiter of the internet, operates.

The leaked documents touch on topics like what kind of data Google collects and uses, which sites Google elevates for sensitive topics like elections, how Google handles small websites, and more. Some information in the documents appears to be in conflict with public statements by Google representatives, according to Fishkin and King.

[–] [email protected] 98 points 6 months ago (4 children)

Can't wait for selfhosted web search to become better.

[–] [email protected] 62 points 6 months ago (6 children)

You mean hosting your own crawler/indexer? That doesn't really sound like a thing you could do cost-effectively.

[–] [email protected] 62 points 6 months ago (1 children)

No problem, we crowdsource the crawling, torrent style.

We outsourced that to Google for reasonable performance reasons. But they shit the bed, so now there's no choice but to do it ourselves.
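A torrent-style crowdsourced crawl needs some way for peers to agree on who crawls what without a central coordinator. One common approach is consistent hash partitioning of the URL space; a minimal sketch (all names here are hypothetical, not from any existing project):

```python
import hashlib

def assigned_peer(url: str, num_peers: int) -> int:
    """Deterministically assign a URL to one of num_peers crawlers
    by hashing the URL, so every peer computes the same split
    without central coordination."""
    digest = hashlib.sha256(url.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_peers

# Each peer only crawls the URLs that hash to its own index.
my_peer_id = 3
urls = ["https://example.org/a", "https://example.com/b"]
mine = [u for u in urls if assigned_peer(u, 16) == my_peer_id]
```

A real system would also need churn handling (peers joining and leaving) and result verification, which is where it gets hard.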

[–] bamfic 11 points 6 months ago (1 children)

ooh that might be an interesting app to run on veilid

[–] [email protected] 1 points 6 months ago (1 children)

What is that, and how does it apply?

[–] [email protected] 2 points 6 months ago

Veilid is a peer-to-peer network and application framework released by the Cult of the Dead Cow on August 11, 2023, at DEF CON 31.[1][2][3][4] Described by its authors as "like Tor, but for apps",[5] it is written in Rust, and runs on Linux, macOS, Windows, Android, iOS,[6] and in-browser WASM.[7] VeilidChat is a secure messaging application built on Veilid.[1][4]

Veilid borrows from both the Tor anonymising router and the InterPlanetary File System (IPFS), to offer encrypted and anonymous peer-to-peer connection using a 256-bit public key as the only visible ID. Even details such as IP addresses are hidden.[4]

Source: https://en.wikipedia.org/wiki/Veilid

[–] [email protected] 19 points 6 months ago (2 children)

Surprisingly, it's very doable: it requires only basic technical knowledge and relatively minimal computing resources (it runs in the background on your computer).

https://yacy.net/ Github

I have a Tampermonkey script that sends YaCy to crawl any website I visit, and it keeps a relatively good index of those sites for personal use. Combine YaCy with ~300 GB of Kiwix databases, add SearXNG as a frontend, and you have a pretty strong self-hosted search engine.
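A userscript like that mostly just fires one HTTP request per visited page at the local YaCy instance. A sketch in Python of the request it would build (the endpoint and parameter names are assumptions from memory and may differ across YaCy versions, so check your instance's API docs):

```python
from urllib.parse import urlencode

YACY = "http://localhost:8090"  # assumed default local YaCy address

def crawl_request_url(page_url: str, depth: int = 0) -> str:
    """Build a YaCy crawl-start request for one page.
    Parameter names here are assumptions and may differ
    between YaCy versions."""
    params = urlencode({"crawlingURL": page_url, "crawlingDepth": depth})
    return f"{YACY}/Crawler_p.html?{params}"

# A userscript would fire this for every page you visit, e.g.:
# urllib.request.urlopen(crawl_request_url("https://example.org/"))
```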

Of course you need to supplement your searches from other search engines, as yacy does not crawl the whole web, just what you tell it to.

I encourage anyone who's even slightly interested in this stuff to try YaCy. It's an ancient piece of software, but it still works very well and is not an abandoned project yet!

--

I personally use YaCy mostly in private mode, but it does have the distributed network there as well. Yacy current freeworld status

[–] [email protected] 7 points 6 months ago

Yeah, I guess the P2P component sort of solves part of the issue I was imagining by distributing indexes and crawling. I was thinking that people were trying to run all of Google on a raspberry pi at home.

[–] Finadil 5 points 6 months ago (1 children)

This is interesting, have you had it index reddit? I'm just wondering how much storage space the database takes up.

[–] [email protected] 3 points 6 months ago

Hi!

Great question! I don't crawl reddit, but this applies to other large sites as well. Reddit has, at this very moment, banned the IP range where I host my YaCy instance (Hetzner). I just looked it up in my index: I do have 257k pages indexed from reddit, via a Teddit instance I used to run before the reddit API enshittification. Going to delete those right now.

Crawling works by defining a crawl depth, which limits how much content is crawled from a site:

  • depth 0 = only the page you send Yacy to, nothing more
  • depth 1 = that page, plus all the pages it links to
  • depth 2 = those pages, plus all the pages they link to
  • depth 3 ...
  • depth n ...

... etc.
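With a uniform branching factor that count grows geometrically with depth. A rough upper bound, assuming every page exposes the same number of fresh links (real crawls dedupe heavily, so actual counts are far lower):

```python
def estimated_pages(links_per_page: int, depth: int) -> int:
    """Rough upper bound on pages fetched from one crawl start,
    assuming every page yields `links_per_page` new links."""
    return sum(links_per_page ** d for d in range(depth + 1))

# depth 0 -> 1 page; depth 1 -> 1 + 20; depth 2 -> 1 + 20 + 400
print(estimated_pages(20, 2))  # -> 421
```

This is why bumping the depth from 1 to 2 can inflate a crawl by an order of magnitude or more.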

I have my Tampermonkey scripts set to a crawl depth of only 1 at the moment (just set them to 2, actually; I'm kinda curious how much more I'll be crawling). I manually crawled some local news sites out of curiosity at the beginning. My database is currently relatively small, only around ~86.38 gigabytes according to YaCy, storing approximately 2.6 million documents in YaCy's Solr.
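Those figures give a rough per-document cost, which makes capacity planning easy to sketch (simple arithmetic on the numbers above, nothing YaCy-specific):

```python
# Back-of-envelope from the figures above: ~86.38 GB for ~2.6M documents.
index_bytes = 86.38e9
documents = 2.6e6

per_doc_kb = index_bytes / documents / 1024
print(round(per_doc_kb, 1))  # -> 32.4 KB of index per document

# Scaling that average to a hypothetical 100M-document index:
projected_tb = 100e6 * (index_bytes / documents) / 1e12
print(round(projected_tb, 1))  # -> 3.3 TB
```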

Yacy memory & disk usage. Yacy solr index size

--

Yacy has tons of options for crawling, so you can customize how much it crawls and even filter out overly large sites with maximum number of documents set when you send Yacy there.

Large picture of Yacy's interface for starting a crawl.

--

The Tampermonkey script I've been talking about in these posts is a very simple one: https://github.com/JeremyRand/YaCyIndexerGreasemonkey

Hit me up if you guys have more questions! I'm by no means an expert on Yacy, but I will do my best to answer.

[–] [email protected] 16 points 6 months ago

Right!

Before his company was able to block more of Microsoft's own tracking scripts, DuckDuckGo CEO and founder Gabriel Weinberg explained in a Reddit reply why firms like his weren't going the full DIY route:

“… [W]e source most of our traditional links and images privately from Bing … Really only two companies (Google and Microsoft) have a high-quality global web link index (because I believe it costs upwards of a billion dollars a year to do), and so literally every other global search engine needs to bootstrap with one or both of them to provide a mainstream search product. The same is true for maps btw -- only the biggest companies can similarly afford to put satellites up and send ground cars to take streetview pictures of every neighborhood.”

Ars

[–] warmaster 16 points 6 months ago (2 children)
[–] [email protected] 53 points 6 months ago (3 children)

Federated directories. We're going back to Yahoo like it's 1995.

[–] [email protected] 32 points 6 months ago (2 children)
[–] AbidanYre 19 points 6 months ago

<under_construction.gif>

[–] [email protected] 5 points 6 months ago (1 children)

Uh...I know we're all just having fun here, but I need to be part of a webring again. If anyone is more than joking, I kinda need to know about it. Thanks.

[–] [email protected] 5 points 6 months ago (1 children)

there are tons of webrings still going these days!

[–] [email protected] 4 points 6 months ago* (last edited 6 months ago) (1 children)

Seriously? Cool. I'm going to go do some research then. And maybe entirely change the purpose of my blog, just to fit into one...

[–] [email protected] 1 points 6 months ago

can you share a link to it if you're comfortable with that

[–] [email protected] 4 points 6 months ago (1 children)
[–] neblem 6 points 6 months ago (1 children)

Neocities is trying to be a modern reincarnation https://neocities.org/

[–] [email protected] 2 points 6 months ago

I mistook that for Neopets

[–] [email protected] 1 points 6 months ago

Yahoo patiently plotting its return from Japan.

[–] [email protected] 7 points 6 months ago (1 children)

I'm so ready for something like this. I've cleaned up my bookmarks and been waiting for alternatives to search engines.

[–] [email protected] 7 points 6 months ago

You could use Common Crawl; it's run by a nonprofit.

https://en.wikipedia.org/wiki/Common_Crawl
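Common Crawl publishes a public CDX index you can query over HTTP to find captures of a site. A sketch of building such a query (the crawl ID shown is one specific monthly snapshot; the current list lives at index.commoncrawl.org):

```python
from urllib.parse import urlencode

# One monthly crawl snapshot; newer IDs are listed at
# https://index.commoncrawl.org/
CRAWL_ID = "CC-MAIN-2024-22"

def cc_index_query_url(url_pattern: str, limit: int = 5) -> str:
    """Build a CDX index query for captures matching url_pattern."""
    params = urlencode({"url": url_pattern, "output": "json", "limit": limit})
    return f"https://index.commoncrawl.org/{CRAWL_ID}-index?{params}"

# Each response line is a JSON record including the WARC filename and
# byte offsets needed to fetch the raw capture, e.g.:
# with urllib.request.urlopen(cc_index_query_url("example.org/*")) as r:
#     for line in r:
#         print(json.loads(line)["filename"])
```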

[–] Im_old 5 points 6 months ago

Look up the YaCy repo on GitHub.

[–] [email protected] 17 points 6 months ago

How is that even supposed to work? Search engines by definition need massive databases to search through. Either you need your own crawler and indexer, which is more than just inefficient, or you are limited to a relatively short list of curated static results.
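For a sense of "massive", here's a back-of-envelope estimate; every number in it is an assumption chosen for illustration, not a measurement:

```python
# Rough sizing of a web-scale text index (all figures assumed).
pages = 1_000_000_000        # a modest web-scale index
text_per_page_kb = 10        # extracted, compressed text per page
index_overhead = 1.5         # postings lists, metadata, etc.

total_tb = pages * text_per_page_kb * 1024 * index_overhead / 1e12
print(round(total_tb, 1))  # -> 15.4 TB before replication
```

Storage alone is within hobbyist reach; the crawl bandwidth and freshness are the parts that don't scale down well.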

[–] [email protected] 12 points 6 months ago (1 children)

If they're taking tips from Google, why would they get better?

[–] [email protected] 32 points 6 months ago* (last edited 6 months ago) (1 children)

Google actually was good, so there's probably some good information in this documentation. If nothing else we can perhaps figure out what "went wrong."

Edit: I've been reading the blog post by the person the leak appears to have been mainly shared with, and there's a lot of in-depth analysis being done there, but I'm not seeing a link to the actual documents. It's a huge article, though; I might be overlooking it.

[–] [email protected] 6 points 6 months ago

That was an interesting read. Thanks for linking to it.

[–] paraphrand 3 points 6 months ago (3 children)

What are the current contenders?

[–] [email protected] 13 points 6 months ago (2 children)

What it looks like beyond Google and Bing

It would be much harder to know what exists beyond "GBY" (Google, Bing, Yandex) and how it all works without the work of Rohan “Seirdy” Kumar. For three years, Kumar has been updating a heavily annotated list of search engines with their own indexes. It is 7,000 words, but only a portion of it deals with engines offering general indexing, in the English language. You can read Kumar's evaluation methodology for a better understanding of how he compared and assessed sites.

What stands out? Mojeek ("it's not bad… I'd live") and Stract ("a useful supplement to more major engines") are two of Kumar's favorites. Right Dao has "very fast, good results," in part because its crawler starts off from Wikipedia. Yep reaches farther out, showing results that link to and back from sites related to your query and also promises to share ad revenue with creators. All of them show promise, but you get the sense that they're a second car, or a third bicycle, rather than a primary transport.

There are far smaller-scoped engines in other sections of Kumar's post. If you're wondering where that one other search engine you've heard about is, it's probably in the "Semi-independent indexes" section, because it uses a GBY index when its own results are not strong enough. Here, you'll find cryptocurrency-friendly, controversy-courting-founder-having Brave, a few engines that either "resell" GBY results or stuff affiliate links into them, and "the most interesting entry," according to Kumar, Kagi.

Kagi requires an account and uses its own index, Teclis, in combination with Google, Bing, Yandex, Mojeek, and others, including, notably, Brave. Kagi's founder has strong opinions on the AI-based future of search and responding to harmful searches in ways that are not "scalable." How much of that does or does not bother you will vary, but it's worth noting that Kagi also suffers when the GBY triumvirate is restricted.

Ars Technica this week: Bing outage shows just how little competition Google search really has

The referenced search engine comparison by Rohan “Seirdy” Kumar

[–] [email protected] 5 points 6 months ago

Can't emphasise enough that this piece is a necessary read for anyone who wants to know about search; not just because it says good things about us, but because of the depth of research that has gone into it. Most articles you encounter about indexes just repeat whatever a (meta)search engine says about itself, without even looking at privacy policies for "relationships with Microsoft" etc. or doing any comparative work.

[–] [email protected] 2 points 6 months ago (1 children)

I've been using Kagi and really like it so far. It's not good for local stuff, but afaik only Google and Bing have the resources and userbase for things like maps and reviews. It's designed to be an ad-free 'premium' search engine and only earns revenue from users paying for membership.

[–] neblem 4 points 6 months ago

OpenStreetMap's platform is the only real way to compete with Google and Apple, and it's why Microsoft, even though it has Bing Maps, has licensed resources like satellite imagery to OSM for mapping. It's awesome in bigger population areas, but there's still a lot to map in rural places outside the EU.

Reviews are harder. Right now the leading open platform, afaik, is Open Reviews (aka Mangrove Reviews), which has tie-ins to OSM projects like MapComplete. OsmAnd and Organic Maps have open tickets to hook into that ecosystem. You're right about the userbase problem, though; I think it (or a successor) needs ActivityPub federation to really take off. That said, there are several active non-Google, non-free alternatives like Yelp and TripAdvisor, as well as niche sites for things like camping, parks, and schools.

[–] [email protected] 6 points 6 months ago (1 children)

The only one I know of that isn't a proxy search is YaCy.

[–] [email protected] 4 points 6 months ago (1 children)

I was looking at it the other day; unfortunately it's got quite poor results.

[–] [email protected] 5 points 6 months ago

YaCy, Mwmbl, Alexandria, Stract, Marginalia to name a few.