this post was submitted on 03 Jul 2023
2733 points (98.3% liked)

Google has reportedly removed many of Twitter's links from its search results after the social network's owner, Elon Musk, announced that reading tweets would be limited.

Search Engine Roundtable found that Google had removed 52% of Twitter links since the crackdown began last week. Twitter now blocks users who are not logged in and sets limits on reading tweets.

According to Barry Schwartz, Google reported 471 million Twitter URLs as of Friday. But by Monday morning, that number had plummeted to 227 million.

"For normal indexing of these Twitter URLs, it seems like these tweets are dropping out of the sky," Schwartz wrote.

Platformer reported last month that Twitter refused to pay its bill for Google Cloud services.

[–] [email protected] 74 points 1 year ago (26 children)

I feel like Google is going to have to find a way to effectively index federated content at some point. The only way to really get human information is from sites like Reddit and Twitter. And both of those platforms seem to be dedicated to completely imploding at the moment.

[–] [email protected] 25 points 1 year ago (2 children)

There's nothing about the content being federated that makes it hard or impossible to index. Each instance is just a website with public webpages that a bot can read, and that's all a search engine needs to index it. The worst-case scenario is that the bot will find the same content on multiple instances.
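
For what it's worth, indexing a federated post is just ordinary crawling. Here's a minimal sketch in Python, assuming made-up post URLs and that the instance serves the post text in the initial HTML; hashing the extracted text is one crude way to skip the same post showing up on multiple instances:

```python
import hashlib
import urllib.robotparser

import requests
from bs4 import BeautifulSoup

# Hypothetical URLs: the same post as seen from two different instances.
PAGES = [
    "https://lemmy.world/post/123456",
    "https://other-instance.example/post/987",
]

seen = set()

for url in PAGES:
    # Be a polite bot: check robots.txt before fetching.
    base = url.split("/post/")[0]
    robots = urllib.robotparser.RobotFileParser(base + "/robots.txt")
    robots.read()
    if not robots.can_fetch("ToyCrawler", url):
        continue

    html = requests.get(url, headers={"User-Agent": "ToyCrawler"}, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

    # Worst case: the same content federated to several instances.
    # Hash the extracted text and skip anything we've already seen.
    digest = hashlib.sha256(text.encode()).hexdigest()
    if digest in seen:
        continue
    seen.add(digest)
    print(url, "->", len(text), "characters of indexable text")
```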

I did read that the website is loaded entirely through JavaScript, and that maybe the Googlebot doesn't execute JavaScript, so it can't see the text. I don't know if that's still a problem in 2023, though.

This article says it's not a problem, but I didn't read past the TL;DR, so maybe there's a caveat, like maybe the site has to use a popular framework such as React for it to work.

https://searchengineland.com/tested-googlebot-crawls-javascript-heres-learned-220157
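
If you want to check what a crawler that doesn't execute JavaScript would see, you can fetch the raw HTML and look for the post text in it. A quick sketch; the URL and phrase below are just placeholders:

```python
import requests
from bs4 import BeautifulSoup

# Placeholder values: swap in a real post URL and a phrase from that post.
POST_URL = "https://lemmy.world/post/123456"
EXPECTED_PHRASE = "some phrase from the post body"

# Fetch the page the way a non-JS crawler would: initial HTML only, no rendering.
raw_html = requests.get(POST_URL, timeout=10).text
visible_text = BeautifulSoup(raw_html, "html.parser").get_text(" ", strip=True)

if EXPECTED_PHRASE in visible_text:
    print("The text is in the initial HTML, so a crawler that skips JS can see it.")
else:
    print("The text isn't there; it's probably injected by JavaScript after load.")
```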

[–] [email protected] 2 points 1 year ago

Rendering with JS definitely makes a difference; it's part of the reason SSR (server-side rendering) is such a big deal for SEO.
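
Roughly, the difference looks like this. A toy Flask app just to show the contrast, not how Lemmy or Twitter actually serve pages:

```python
from flask import Flask

app = Flask(__name__)

POST_BODY = "Example post text that we want search engines to index."

@app.route("/api/post")
def api_post():
    # JSON endpoint used by the client-rendered page below.
    return {"body": POST_BODY}

@app.route("/ssr")
def server_rendered():
    # SSR: the text is in the HTML the server sends, so any crawler sees it,
    # whether or not it executes JavaScript.
    return f"<html><body><article>{POST_BODY}</article></body></html>"

@app.route("/csr")
def client_rendered():
    # Client-side rendering: the server sends an empty shell and JavaScript
    # fetches and inserts the text. A crawler that doesn't run JS sees nothing.
    return """<html><body><div id="app"></div>
    <script>
      fetch('/api/post').then(r => r.json()).then(p => {
        document.getElementById('app').textContent = p.body;
      });
    </script></body></html>"""

if __name__ == "__main__":
    app.run()
```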
