The_Lemmington_Post

joined 4 months ago
[–] [email protected] 12 points 1 month ago* (last edited 1 month ago) (1 children)

Fopnu seems like the most similar alternative to eMule that is actively maintained. Unlike Soulseek, it can download a file from multiple seeders at the same time.

8
submitted 1 month ago* (last edited 1 month ago) by [email protected] to c/[email protected]
 

The Struggle of Economic Realities: A Call for Change

Introduction

In a world where economic disparities are widening, many individuals find themselves facing the harsh reality of financial struggle despite working hard and earning a decent wage. This article delves into the frustrations and challenges expressed by individuals grappling with unaffordable housing, stagnant wages, and a sense of disillusionment with the current economic system.

Living in the Shadow of Unaffordable Housing

The dream of owning a home or even renting a comfortable living space feels increasingly out of reach for many. Skyrocketing housing costs, coupled with stagnant wages, have created a situation where even basic accommodation is a financial burden. The discrepancy between past generations' ability to afford homes on modest incomes and the current struggle highlights systemic issues within the housing market.

  • Escalating housing costs outpacing wage growth
  • Impact of inflation and Federal Reserve policies on housing affordability
  • Generational disparities in housing affordability

The Illusion of Economic Mobility

Despite the rhetoric of the "American Dream" and the promise of upward mobility, many find themselves trapped in a cycle of financial insecurity. The narrative of pulling oneself up by the bootstraps is increasingly out of touch with the economic realities faced by millions who work tirelessly yet struggle to make ends meet.

  • Myth of economic mobility in contemporary society
  • Impact of generational wealth disparities on economic opportunities
  • Challenges of achieving financial stability despite hard work

The Role of Government and Corporations

Criticism extends beyond individual struggles to the structures and entities perpetuating economic inequality. Both government policies and corporate practices come under scrutiny for exacerbating rather than alleviating financial hardships.

  • Critique of government spending priorities and allocation of resources
  • Influence of corporate interests on public policy and economic decisions
  • Lack of accountability for entities responsible for economic disparities

Unity in the Face of Division

In the midst of economic turmoil, calls for unity and solidarity emerge as individuals recognize the need to stand together against systemic injustices. Divisive tactics, whether based on identity politics or ideological differences, only serve to further entrench societal divisions.

  • Importance of unity across diverse demographics in addressing economic inequality
  • Recognition of common struggles transcending racial, ethnic, and ideological lines
  • Rejection of divisive narratives perpetuated by political and corporate interests

Reimagining Economic Justice

Moving forward requires a collective reevaluation of societal values and priorities. A shift towards equitable economic policies and a commitment to addressing systemic injustices are essential steps towards creating a more just and inclusive society.

  • Advocacy for policies promoting economic justice and opportunity for all
  • Importance of civic engagement and grassroots activism in driving systemic change
  • Vision for a future where economic prosperity is accessible to every individual

Conclusion

The challenges outlined in this article underscore the urgent need for a fundamental reexamination of economic structures and priorities. From unaffordable housing to stagnant wages and systemic inequality, the status quo is no longer tenable. Only through collective action and a commitment to justice and equity can we hope to build a society where every individual has the opportunity to thrive.

46
66 years apart (discuss.online)
 
 

I read there were genetically modified mosquitoes that only bred males. Is that commercially available? What about that laser that zapped roaches — has it been improved to zap mosquitoes too? Maybe there is a guide somewhere on how to build a DIY anti-mosquito air defense system?

[–] [email protected] 17 points 2 months ago (3 children)

26 TB drives were supposedly coming a few years ago, and yet I'm only recently seeing 20 TB drives.

 

In the near future, water wars will be fueled by big tech and AI's insatiable thirst for resources, hidden from the public eye. Data centers, powered by AI, consume immense amounts of water for cooling and electricity, rivaling entire nations in usage. Semiconductor plants and mining operations further strain water resources, exacerbating environmental and social tensions. As big tech relentlessly promotes AI adoption, the looming conflict over water resources becomes inevitable, with potential consequences overshadowing sci-fi fears of AI apocalypse. The narrative urges vigilance against billionaire influence shaping policy and public discourse, advocating for awareness and critical thinking.

 

Estimating the total watch hours for all YouTube videos under the entertainment genre is a challenging task due to the vast amount of content available on the platform and the dynamic nature of user engagement. However, I can provide you with an approximate estimate based on the available data and make some reasonable assumptions.

According to YouTube's own statistics, as of May 2019, people watch over a billion hours of video content on YouTube every day. While this number includes all genres of videos, it is safe to assume that a significant portion of this viewership is attributed to the entertainment genre, which encompasses various categories such as movies, TV shows, music videos, comedy, and more.

To estimate the total watch hours for the entertainment genre, we can make the following assumptions:

  1. Entertainment genre accounts for a significant portion of YouTube's viewership, let's assume 50% for this estimation.
  2. The daily watch hours have likely increased since 2019 due to the platform's growth and the rise in demand for online entertainment during the COVID-19 pandemic.

Based on these assumptions, we can calculate the total watch hours for the entertainment genre as follows:

Daily watch hours on YouTube (as of May 2019) = 1 billion hours

Assuming the entertainment genre accounts for 50% of viewership:

Daily watch hours for entertainment genre = 1 billion x 0.5 = 500 million hours

Now, let's assume a conservative growth rate of 10% per year since 2019 to account for the increasing demand for online entertainment.

Daily watch hours for entertainment genre in 2024 = 500 million x (1.1)^5 = 805 million hours

To estimate the total watch hours for all YouTube videos under the entertainment genre, we can multiply the daily watch hours by the number of days in a year:

Total watch hours for entertainment genre in 2024 = 805 million hours x 365 days = 293.825 billion hours

It's important to note that this is a rough estimate based on assumptions and may vary significantly from the actual numbers. The estimation process is complicated by factors such as varying engagement levels across different entertainment categories, regional differences in content consumption, and the dynamic nature of user preferences.
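As a quick sanity check, the arithmetic above can be reproduced in a one-liner. Keep in mind the 500-million baseline and the 10% annual growth rate are this estimate's own assumptions, not measured figures:

```shell
# Back-of-envelope check of the figures above: 500M hours/day baseline,
# compounded at 10%/year for 5 years, then annualized.
awk 'BEGIN {
    grown = 500 * 1.1^5            # million hours/day after 5 years of growth
    daily = int(grown + 0.5)       # rounded as in the text: 805
    yearly = daily * 365 / 1000    # convert to billions of hours per year
    printf "%d million hours/day, %.3f billion hours/year\n", daily, yearly
}'
```

This reproduces the 805 million hours/day and 293.825 billion hours/year figures used above.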

 

Solving coding challenges on platforms like CodeWars, LeetCode, and Exercism can be useful for practicing problem-solving skills and getting familiar with algorithmic thinking. However, to better prepare for large projects, it's essential to complement these exercises with more comprehensive and practical approaches. Here are some ways to practice coding that can better prepare you for large projects:

  1. Build Complete Applications: Instead of focusing solely on isolated challenges, try building complete applications from scratch. This could be a web application, a mobile app, a desktop application, or any other type of software. Building a complete application will expose you to various aspects of software development, such as project planning, architecture design, code organization, testing, deployment, and maintenance.

  2. Contribute to Open-Source Projects: Contributing to open-source projects can be an excellent way to gain real-world experience working on large codebases. You'll learn how to collaborate with other developers, follow coding standards, work with version control systems, and understand how to integrate your code into an existing codebase.

  3. Work on Personal Projects: Personal projects allow you to explore your interests and experiment with different technologies and frameworks. You can simulate real-world scenarios, such as integrating with third-party APIs, handling user authentication, implementing database interactions, and deploying your application.

  4. Study Design Patterns and Best Practices: Understanding design patterns and best practices for software development is crucial for working on large projects. Study topics like SOLID principles, design patterns (e.g., Factory, Observer, Singleton), code organization, and architectural patterns (e.g., MVC, MVVM).

  5. Practice Test-Driven Development (TDD): TDD is a software development practice that emphasizes writing tests before writing the actual code. This approach helps ensure code quality, maintainability, and facilitates refactoring. Practicing TDD will help you develop a mindset for writing testable and modular code, which is essential for large projects.

  6. Participate in Coding Challenges or Hackathons: While coding challenges on platforms like CodeWars and LeetCode are useful for practicing algorithms and data structures, participating in coding challenges or hackathons that involve building complete applications or working on larger problems can provide a more realistic experience.

  7. Collaborate with Other Developers: Working on projects with other developers can help you learn about code reviews, merging conflicts, communication, and team collaboration – skills that are essential for large projects.

  8. Learn Version Control Systems: Mastering version control systems like Git is crucial for working on large projects, especially in a team environment. Practice branching, merging, resolving conflicts, and using tools like GitHub or GitLab.

  9. Study Software Architecture and Design: Understand software architecture principles, such as layered architecture, microservices, and event-driven architecture. Study design techniques like domain-driven design and clean architecture, which can help you create maintainable and scalable systems.

  10. Practice Refactoring and Code Optimization: As projects grow larger, the need for refactoring and code optimization becomes more prevalent. Practice techniques like code refactoring, performance optimization, and identifying and resolving bottlenecks.

Remember, the key to preparing for large projects is to expose yourself to various aspects of software development, including project management, collaboration, testing, deployment, and maintenance. By combining coding challenges with practical experience, you'll develop a well-rounded skill set that will prepare you for the challenges of working on large-scale projects.
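The TDD workflow from item 5 can be sketched even in shell — this is a minimal illustration using a hypothetical `add` function, not a recommendation of any particular test framework:

```shell
# TDD sketch: write the failing test first, then the simplest code to pass it.
# Step 1: the test (would fail before `add` exists).
test_add() {
    [ "$(add 2 3)" = "5" ] || { echo "test_add: FAIL"; return 1; }
    echo "test_add: PASS"
}

# Step 2: the simplest implementation that makes the test pass.
add() { echo $(( $1 + $2 )); }

# Step 3: run the test; future refactors keep this as a safety net.
test_add
```

The same red-green-refactor loop scales up with real test runners (pytest, JUnit, bats, etc.) on larger projects.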

 

I'm excited to see the new meme browsing interface feature in PieFed. I expected PieFed to be yet another Reddit clone using a different software stack and without any innovation. I believe there's an opportunity to take things a step further by blending the best elements of platforms like Reddit and image boards like Safebooru.

I wish there was a platform that was a mix between Reddit and image boards like Safebooru. The problem I have with Reddit is the time-consuming process of posting content; I should be able to post something in a few seconds, but often finding the right community takes longer than actually posting, and you have to decide whether to post in every relevant community or just the one that fits best. In the case of Lemmy, the existence of multiple similar communities across different instances makes this issue even worse.

I like how image boards like Safebooru offer a streamlined posting experience, allowing users to share content within seconds. The real strength of these platforms lies in their curation and filtering capabilities. Users can post and curate content, and others can contribute to the curation process by adding or modifying tags. Leaderboards showcasing top taggers, posters, and commenters promote active participation and foster a sense of community. Thanks to the comprehensive tagging system, finding previously viewed content becomes a breeze, unlike the challenges often faced on Reddit and Lemmy. Users can easily filter out unwanted content by hiding specific tags, something that would require blocking entire communities on platforms like Lemmy.

However, image boards also have their limitations. What I don't like about image boards is that they are primarily suited for image-based content and often lack robust text discussion capabilities or threaded comments, which are essential for fostering meaningful conversations.

Ideally, I envision a platform that combines the best of both worlds: the streamlined posting experience of image boards with the robust text discussion capabilities of platforms like Reddit and Lemmy.

I would be thrilled to contribute to a platform that considered some of the following features:

I would also like to see more community-driven development: asking users for feedback periodically in a post, and publicly stating what features the devs will be working on. Code repositories' issue trackers have some limitations. A threaded, tree-like comment system is better for discussions, and having upvotes/downvotes helps surface the best ideas. I propose using a Lemmy community as the issue tracker instead.

 

18
submitted 3 months ago* (last edited 3 months ago) by [email protected] to c/[email protected]
 

Things got heated in the piracy community at lemmy.dbzer0.com when the admin, db0, announced plans to use a generative AI tool to rotate the community's banner daily with random images.

While some praised the creative idea, others strongly objected, arguing that AI-generated art lacks soul and meaning. A heated debate ensued over the artistic merits of AI art versus human-created art.

One user threatened to unsubscribe from the entire instance over the "wasteful BS" of randomly changing the banner every day. The admin defended the experiment as a fun way to inject randomness and chaos.

Caught in the crossfire were arguments about corporate ties to AI image generators, electricity waste, and whether the banner switch-up even belonged on a piracy community in the first place.

In the end, the admin stubbornly insisted on moving forward with the AI banner rotation, leaving unhappy users to either embrace the chaotic visuals or jump ship. Such is the drama and controversy that can emerge from a seemingly innocuous banner change!

— Claude, Anthropic AI

[–] [email protected] 0 points 3 months ago* (last edited 3 months ago) (1 children)

Is this relevant for executables only or also movies, TV series, game ISOs, books, audiobooks, etc?

[–] [email protected] 17 points 3 months ago* (last edited 3 months ago) (5 children)

For watching movies and shows there is nothing simpler than Stremio with the Torrentio addon.

For other files, I don't like the BitTorrent protocol because it complicates sharing: creating a torrent requires files to remain in their original locations without renaming. I prefer Nicotine+ and Fopnu, as they allow me to easily select folders for sharing without any complications. These are the only programs I know of that are compatible with both Windows and Linux and are actively developed. They are newbie friendly because searching for files inside the program is straightforward. I've also used eMule, Gnutella, DC++, Shareaza, and MLDonkey, but I don't recommend them, although eMule has good availability for old/rare content.

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago)

This wouldn't work because of the way it deletes everything when making space on a filesystem. It should just move enough files to make space for the ones it's moving there. Even if this script worked, I'd need a lot of testing before I was confident using it to reorganize my actual data.

Also, to copy the directory structure I use:

rsync -av -f"+ */" -f"- *" "/mnt/hdd1" "/mnt/hdd2" &
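The `-f"+ */" -f"- *"` filter pair (include every directory, exclude every file) can be sanity-checked in throwaway temp directories before running it against real drives — a self-contained sketch:

```shell
# Check that the "+ */" (keep directories) / "- *" (skip files) filter pair
# copies only the directory skeleton, using temp directories.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/a/b"
touch "$src/a/file.txt"
rsync -a -f"+ */" -f"- *" "$src/" "$dst/"
if [ -d "$dst/a/b" ] && [ ! -e "$dst/a/file.txt" ]; then
    echo "directory skeleton copied, files skipped"
fi
rm -rf "$src" "$dst"
```

Note that the trailing slash on the source matters: without it, rsync copies the source directory itself into the destination (as the command above does with /mnt/hdd1, producing /mnt/hdd2/hdd1).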
[–] [email protected] 1 points 3 months ago* (last edited 3 months ago) (1 children)

To accomplish this task, we can use a combination of Bash scripting and various Linux utilities such as find, mv, du, and sort. Here's a step-by-step approach:

  1. Create a bash script file, let's call it reorganize.sh.
#!/bin/bash

# Define the mount points and the desired free space threshold
declare -a MOUNT_POINTS=("/mnt/hdd0" "/mnt/hdd1" "/mnt/hdd2" ...)
FREE_SPACE_THRESHOLD=20  # In gigabytes

# Function to get the available space on a file system
get_available_space() {
    local mount_point="$1"
    df --output=avail "$mount_point" | tail -n 1 | awk '{print $1}'
}

# Create an associative array to keep track of the available space on each file system
declare -A AVAILABLE_SPACE
for mount_point in "${MOUNT_POINTS[@]}"; do
    AVAILABLE_SPACE["$mount_point"]=$(get_available_space "$mount_point")
done

# Function to find the mount point with the most available space
find_mount_point_with_most_space() {
    local max_space=0
    local max_mount_point=""

    for mount_point in "${!AVAILABLE_SPACE[@]}"; do
        if [ "${AVAILABLE_SPACE[$mount_point]}" -gt "$max_space" ]; then
            max_space="${AVAILABLE_SPACE[$mount_point]}"
            max_mount_point="$mount_point"
        fi
    done

    echo "$max_mount_point"
}

# Function to redistribute files based on alphabetical order
redistribute_files() {
    local source_dir="$1"
    local target_dir="$2"

    # Get a list of files sorted alphabetically by relative path
    local files=$(find "$source_dir" -type f | sort)

    # Move files to the target directory
    while read -r file; do
        local relative_path=$(realpath --relative-to="$source_dir" "$file")
        local file_size=$(du -k "$file" | cut -f 1)

        # Update available space on source and target directories
        AVAILABLE_SPACE["$source_dir"]=$((AVAILABLE_SPACE["$source_dir"] + file_size))
        AVAILABLE_SPACE["$target_dir"]=$((AVAILABLE_SPACE["$target_dir"] - file_size))

        mkdir -p "$target_dir/$(dirname "$relative_path")"
        mv "$file" "$target_dir/$relative_path"
    done <<< "$files"
}

# Function to move files from one file system to another if the free space is below the threshold
move_files_if_needed() {
    local source_dir="$1"
    local temp_dir=$(mktemp -d --tmpdir="$(find_mount_point_with_most_space)")

    if [ "${AVAILABLE_SPACE[$source_dir]}" -lt "$((FREE_SPACE_THRESHOLD * 1024 * 1024))" ]; then
        # If the available space on the source directory is below the threshold,
        # move the files to a temporary directory on the mount point with the most available space
        redistribute_files "$source_dir" "$temp_dir"
        rm -rf "$source_dir"/*
        # Move the files back to the source directory, preserving the alphabetical order
        redistribute_files "$temp_dir" "$source_dir"
        rmdir "$temp_dir"
    fi
}

# Main script
for ((i=0; i<${#MOUNT_POINTS[@]}; i++)); do
    mount_point="${MOUNT_POINTS[$i]}"  # note: "local" is only valid inside functions
    # Check if the current mount point needs to be cleared to meet the free space threshold
    move_files_if_needed "$mount_point"
    # Redistribute files from the current mount point to the remaining mount points in alphabetical order
    for ((j=i+1; j<${#MOUNT_POINTS[@]}; j++)); do
        redistribute_files "$mount_point" "${MOUNT_POINTS[$j]}"
    done
done
  2. Save the script file and make it executable:
chmod +x reorganize.sh
  3. Run the script:
./reorganize.sh
 

I have several file systems mounted on /mnt/hdd0, /mnt/hdd1, /mnt/hdd2, etc. on my Manjaro Linux system. Let's call these file systems fs0, fs1, fs2, and so on. Currently, all of these file systems have the same directory structure and contain files.

My goal is to reorganize and redistribute all the files across the drives based on the alphabetical ordering of their relative file paths, following these rules:

  1. The files with the earliest alphabetical relative paths should be moved to the first mounted file system in alphabetical order (e.g. /mnt/hdd0) which contains the appropriate folder structure.
  2. The files with the latest alphabetical relative paths should be moved to the last mounted file system in alphabetical order (e.g. /mnt/hddN) which contains the appropriate folder structure.
  3. The remaining files should be distributed across the remaining mounted file systems (/mnt/hdd1, /mnt/hdd2, etc.) in alphabetical order based on their relative paths.

The process should fill up each file system in alphabetical order (/mnt/hdd0, /mnt/hdd1, etc.) until the given free space threshold is reached before moving on. While reorganizing, I want to ensure each file system maintains a specified amount of free space remaining. If a file system does not have enough free space for the files being moved to it, I want to move enough existing files from that file system to another file system with the most available space to make room.

I need step-by-step instructions, including commands, scripts, or tools in Manjaro Linux, to automate this entire process of reorganizing and redistributing the files while following the rules mentioned above.

[–] [email protected] 5 points 3 months ago

Nice!

A quick suggestion. I personally would prefer if you mention what you will be working on rather than what you've completed. It creates more excitement to know what's coming next. I also find it easier to understand issues than pull requests.

[–] [email protected] 1 points 3 months ago

More critically, the proof-of-concept so far appears to lack any real work on moderation tools or implementing a web of trust system. These would be absolutely vital components for a federated encyclopedia to have any chance of controlling quality and avoiding descending into a sea of misinformation and edit wars between conflicting "truths." Centralized oversight and clear enforced guidelines are key reasons why Wikipedia has been relatively successful, despite its flaws.

Without a robust distributed moderation system in place, a federated encyclopedia runs the risk of either devolving into siloed echo chambers pushing various agendas, or becoming an uncoordinated mess making it impractical as a general reference work. The technical obstacles around federating content policies, privileges and integrated quality control across instances are immense challenges that aren't obviously addressed by this early proof-of-concept.

While novel approaches like federation are worth exploring, straying too far from Wikipedia's principles of neutral point-of-view and community-driven policies could easily undermine the entire premise. Lofty goals of disrupting Wikipedia are admirable, but successfully replacing its dominance as a general reference work seems extremely unlikely without solving these fundamental issues around distributed content governance first.

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago)

To recover the file system, files, and directory structure from an ext4 drive that you formatted to NTFS, you'll need to use data recovery tools designed for this purpose. Here are the general steps you can follow:

  1. Stop using the drive immediately: Once you realize you need to recover data from the formatted drive, stop using it and avoid writing any new data to it. This will increase your chances of successful recovery.

  2. Create an image of the drive: It's recommended to create a byte-by-byte image of the entire drive using a tool like ddrescue or dd. This will ensure that you're working with an exact copy of the drive, preserving all existing data.

  3. Replicate the loss of data on a smaller drive (optional): If you're working with a large-capacity drive and want to speed up the recovery process for testing purposes, you can replicate the loss of data on a smaller drive. This will allow you to work with a more manageable drive size while testing the recovery process.

  4. Use a data recovery tool: There are several data recovery tools available for Linux that can scan the drive image and attempt to recover files and directory structures from the previously existing ext4 file system. Some popular options include:

  • TestDisk: A powerful command-line tool that can recover lost partitions and repair file systems.
  • PhotoRec: A data recovery tool designed to recover lost files from hard disks, CD-ROMs, and other media.
  • extundelete: A command-line tool specifically designed to recover deleted files from ext4 file systems.
  • Foremost: A data recovery tool that can recover files based on their headers, footers, and data structures.
  • ext4magic: A tool to recover deleted or overwritten files on ext3 and ext4 file systems.
  • Scalpel: A file carving and indexing application that allows you to specify headers and footers to recover file types from a storage media.
  • ddrutility: A complement to GNU ddrescue that finds which files are related to bad sectors and provides special tools for NTFS (no longer actively supported).
  • dvdisaster: A tool that provides additional error protection for CD/DVD media.
  • xfs_undelete: A tool that traverses the inode B+trees of each allocation group and tries to recover all files marked as deleted on an XFS file system.
  5. Scan the drive image: Use the chosen data recovery tool to scan the drive image. Most tools will allow you to specify the type of file system you're trying to recover (in this case, ext4).

  6. Recover files and directories: After the scan is complete, the tool should present you with a list of recoverable files and directories. You can then select the ones you want to recover and specify a location to save them.

  7. Verify recovered data: Once the recovery process is complete, verify that the recovered files and directories are intact and usable.
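The imaging step might look like the following. This is a sketch only: `/dev/sdX` and `/mnt/recovery` are placeholders for your actual device and a separate drive with enough free space — double-check the device name before running anything.

```shell
# Image the formatted disk with ddrescue; the mapfile lets you safely
# resume the copy if it's interrupted. -n skips the slow scraping phase
# on a first pass.
sudo ddrescue -n /dev/sdX /mnt/recovery/drive.img /mnt/recovery/drive.mapfile

# From then on, run recovery tools against the image, never the original disk:
sudo testdisk /mnt/recovery/drive.img
```

Working from the image means a failed recovery attempt costs nothing; you can always start over from the untouched copy.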

It's worth noting that the success of data recovery greatly depends on the extent of the formatting process and whether any new data was written to the drive after formatting. The sooner you attempt recovery, the better your chances of success.

Additionally, if the recovered data is particularly important or valuable, you may want to consider consulting a professional data recovery service, as they often have more advanced tools and expertise to handle complex recovery scenarios.

[–] [email protected] 1 points 3 months ago

The error message "The disk structure is corrupted and unreadable" indicates that there is a problem with the file system or the disk itself, which is preventing Windows from accessing the drive. The Master File Table (MFT) is a critical component of the NTFS file system, and if it's corrupt, the system cannot access files on the drive. You've already attempted to use the chkdsk utility, which is the right first step, but it has failed to repair the MFT.

To recover the files from the drive, you can try the following methods:

Use Data Recovery Software

Since chkdsk was unable to fix the MFT, you can use data recovery software to try and recover your files. EaseUS Data Recovery Wizard is recommended by one of the sources and is known for its ability to recover data from corrupted drives[2]. Follow these steps:

  1. Download and install EaseUS Data Recovery Wizard.
  2. Launch the software and select the drive with the corrupted MFT.
  3. Click "Scan" to start the scanning process.
  4. Once the scan is complete, preview the recoverable files.
  5. Select the files you want to recover and save them to a different drive.

Use the FixMbr Command

Another approach is to use the bootrec.exe command with the /FixMbr parameter to repair the Master Boot Record, which might indirectly help with MFT issues[2]. To do this:

  1. Boot from a Windows installation media.
  2. Choose "Repair your computer" and then "Command Prompt".
  3. Type bootrec.exe /FixMbr and press Enter.

Format the Drive

If the above methods do not work and you cannot recover your files, the last resort is to format the drive, which will erase all data. Before doing this, ensure that you have recovered as much data as possible using data recovery software. To format the drive:

  1. Open Disk Management by pressing Windows + X and selecting it from the list.
  2. Right-click on the drive and select "Format".
  3. Choose NTFS as the file system and complete the format process[4].

Additional Tips

  • Before attempting recovery, it's crucial to stop using the drive to avoid overwriting any recoverable data.
  • If the drive is an external one, try unplugging and replugging it into a different port or computer to rule out connection issues[1].
  • Running hardware and device troubleshooter might help if the issue is related to drivers or hardware[1].
  • If you are not comfortable with these steps or if they do not work, consider contacting a professional data recovery service[1].

Remember, these methods are not guaranteed to recover all your data, and there is a risk of data loss. If the data is extremely important, it's often best to consult with a professional data recovery service before proceeding with any recovery attempts.

Citations:
[1] https://www.salvagedata.com/fix-disk-structure-is-corrupted-and-unreadable/
[2] https://www.easeus.com/data-recovery/fix-corrupt-master-file-table-error-without-losing-data.html
[3] https://forums.tomshardware.com/threads/master-file-table-corrupt-chkdsk-fails.3712756/
[4] https://www.stellarinfo.com/blog/disk-structure-is-corrupted-and-unreadable/
[5] https://www.partitionwizard.com/disk-recovery/corrupt-master-file-table.html
[6] https://www.anyrecover.com/hard-drive-recovery-data/fix-corrupt-master-file-table-error/
[7] https://www.reddit.com/r/datarecovery/comments/fo9o5f/the_disk_structure_is_corrupted_and_unreadable/?rdt=48629
[8] https://recoverit.wondershare.com/file-recovery/fix-corrupt-master-file-table-error.html
[9] https://superuser.com/questions/688367/external-hard-disk-is-not-accessible-the-disk-structure-is-corrupted-and-unrea
[10] https://answers.microsoft.com/en-us/windows/forum/all/windows-cannot-recover-master-file-table-chkdsk/ecb68215-7329-4006-9f70-2d51f610a27f
[11] https://www.youtube.com/watch?v=qKQ5EejHarU
[12] https://www.techrepublic.com/forums/discussions/master-file-table-recovery/
[13] https://7datarecovery.com/blog/disk-structure-corrupted-and-unreadable/
[14] https://www.diskpart.com/articles/windows-cannot-recover-master-file-table-0310.html
[15] https://www.partitionwizard.com/partitionmagic/disk-structure-corrupt-unreadable.html
[16] https://www.stellarinfo.com/blog/fix-corrupt-master-file-table-error/
[17] https://4ddig.tenorshare.com/hard-drive/fix-the-disk-structure-is-corrupted-and-unreadable.html
[18] https://7datarecovery.com/blog/corrupt-master-file-table/

[–] [email protected] 1 points 3 months ago

The error messages you're receiving indicate that the NTFS file system on the drive has become corrupted, and the master file table (MFT) – which is a critical component that stores metadata about files and directories – is damaged. Unfortunately, when the MFT is severely corrupted, Windows' built-in chkdsk utility is often unable to repair it.

However, there are third-party data recovery tools that may be able to recover your files from the corrupted NTFS drive. These tools use advanced algorithms and techniques to scan the drive sector by sector, searching for file signatures and reconstructing the file system structure.

Here are a few recommended data recovery tools you could try:

  1. TestDisk and PhotoRec (Free, open-source): TestDisk is a powerful tool that can attempt to repair damaged file systems, including NTFS. If it fails to repair the file system, you can use its companion tool PhotoRec to recover files directly from the drive.

  2. Stellar Data Recovery (Paid): Stellar Data Recovery is a commercial data recovery suite that includes tools for recovering data from corrupted NTFS drives. It has a solid reputation and decent success rates.

  3. EaseUS Data Recovery Wizard (Paid): Another popular commercial data recovery tool that can recover data from corrupted NTFS drives.

  4. R-Studio (Paid): A professional-grade data recovery tool that can handle various types of file system corruption and data loss scenarios.

Before using any data recovery tool, it's crucial to create a byte-by-byte copy (image) of the corrupted drive first. This will ensure that the tool works on a copy rather than the original drive, reducing the risk of further data loss. You can use tools like ddrescue (Linux/macOS) or ImageX (Windows) to create a drive image.
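A minimal sketch of that imaging step, assuming the corrupted drive is /dev/sdX (a placeholder) and the image is written to a different, healthy disk:

```shell
# Always image the failing drive first and run recovery tools on the copy.
# /dev/sdX and the output paths are placeholders -- adjust for your setup.

# GNU ddrescue retries unreadable sectors and records progress in a map
# file, so an interrupted run can be resumed later:
ddrescue -d -r3 /dev/sdX ntfs-drive.img ntfs-drive.map

# Plain dd also works when the drive has no bad sectors; noerror keeps it
# going past read failures and sync pads the skipped blocks:
dd if=/dev/sdX of=ntfs-drive.img bs=4M conv=noerror,sync status=progress
```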

Once you have a drive image, you can attempt data recovery using the tool of your choice. Keep in mind that the success rate of data recovery depends on the extent of the corruption and the tool's capabilities.

[–] [email protected] 1 points 3 months ago

I accidentally formatted an ext4 partition to NTFS on my Ubuntu 16.04 machine recently and was able to recover the full partition successfully by running a file system check.

sudo fsck.ext4 -vy /dev/sda10

I recorded the steps in this blog post. Note, however, that the scenario there is slightly different. Hope this helps someone else.

https://superuser.com/a/1204121


Recovering Accidentally Formatted ext4 Partition / Fixing Superblock

Published by Rajind Ruparathna

Today, I made the silly mistake of accidentally formatting one of the ext4 partitions on my Ubuntu 16.04 machine to NTFS instead of the pen drive I had actually meant to format. So if you are reading this, you have most probably done something similar, or perhaps someone you know has gone down that path and you may be trying to help them.

Fortunately, I was able to recover my partition completely, and in this post I'll go through the few things that helped me do so. There is still hope, my friend. 🙂

First, I must thank Shane for this askubuntu answer, the author of this blog post for the pointers, and my friend Janaka for helping me out. Have a look at those two posts, as they are very helpful.

If you have accidentally formatted your partition (or lost partitions or data in any other way), the most important thing is to avoid any further writes to that partition. The other important step is creating a backup image of the messed-up disk.

Quoting Shane,

If your messed up drive is sda, and you wanted to store the image in yourname’s home directory for instance: dd if=/dev/sda of=/home/yourname/sda.img.bak bs=512

to restore the image after a failed recovery attempt: dd if=/home/yourname/sda.img.bak of=/dev/sda bs=512

You could of course use /dev/sda1 if you are only interested in the first partition, but as some of these utilities alter the partition table, it is perhaps a better idea to image the whole disk.

Also, if you are using dd for a large operation, it is very helpful to see a progress bar. For this you can use a utility called pv, which reports the progress of data through a pipeline,

for instance: pv -tpreb /dev/sda | dd of=/home/yourname/sda.img.bak bs=512

First of all, I tried the TestDisk tool. I was able to recover some files with it, but I could not find a way to recover the whole partition.

Then I started following the other blog post I shared above. Its first step was identifying the affected partition. I already knew mine, but if you need to find yours you can run sudo fdisk -l. The output for me was as follows.

(screenshot: fdisk -l output)

Now the idea behind the next step is that, since I did not write anything to the formatted disk, my previous ext file system data should still be there. In ext file systems, this data is kept in a record called the superblock, which holds the characteristics of the filesystem: its size, the block size, the empty and filled blocks with their respective counts, the size and location of the inode tables, the disk block map and usage information, and the size of the block groups. (You can read more about it here if you are interested.) So what we are trying to do here is fix the ext file system.

In my case, I was able to do it with the following file system check command. (Note that the same command is there for ext2 and ext3 as well). Before you run the command, make sure the partition is unmounted.

sudo fsck.ext4 -v /dev/xxx

The first part of the output for me was as follows. I was able to see the original partition label ("Personal") which I had previously given to my ext4 partition, and that gave me some relief. If you see the same, things will hopefully turn out well for you too.

(screenshot: first part of the fsck.ext4 output)

At the bottom of the above screen capture, you can see a prompt asking if it is okay to fix a certain block count. I went with yes here; it was really the only option. There were a few more prompts of a similar nature, and it looked like many more were coming, so I stopped the command and re-ran it with the following, which answers yes to all prompts.

sudo fsck.ext4 -vy /dev/sda10

Then after a number of fix prompts, I got the following output.

(screenshot: successful fsck.ext4 output)

When I mounted the partition, everything was back to normal. Hopefully you will be able to recover yours as well. Please note that this might not be the right recovery method for every scenario; I'm just noting it down in the hope that it helps someone else like me. Make sure you understand the steps well before trying them.
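If you want to see how fsck.ext4 behaves before pointing it at real data, you can rehearse on a throwaway file-backed image; this is a safe sketch, assuming e2fsprogs is installed, and needs no root since the target is a regular file:

```shell
# Build a small file-backed ext4 image and check it -- a safe rehearsal
# before running fsck.ext4 against a real partition.
export PATH="$PATH:/sbin:/usr/sbin"   # mkfs/fsck binaries often live here
truncate -s 16M demo.img              # create a sparse 16 MiB image file
mkfs.ext4 -q -F demo.img              # -F: allow a regular file as target
fsck.ext4 -fvy demo.img               # -f force check, -y yes to all prompts
```

The -y flag used in the post is the same one shown here: it auto-answers every fix prompt, which is convenient but also means fsck will not stop to ask before altering anything.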

EDIT (03/09/2017):

If you have a dual-boot set-up, once you boot the other OS (e.g. Windows) you might lose the partition again, since the other OS may see the partitioning differently. So it's best to make a backup as soon as the recovery is done.

Further, even if you lose the partition again once you log in to the other OS, you can still recover it using the fsck.ext4 command.

Cheers! 🙂

~ Rajind Ruparathna

3
submitted 3 months ago* (last edited 3 months ago) by [email protected] to c/[email protected]
 

I moved all files except one from the original ext4 drive (A) to a newly formatted NTFS drive at destination (B), and then I formatted drive A by going to Disk Management on Windows, deleting the volume, and making a new simple volume with a quick format.

After restarting I found out that the destination drive was corrupted.

On Windows I got

Location is not available

G:\ is not accessible.
The disk structure is corrupted and unreadable.

On Windows, I ran chkdsk g: /f /r /x on drive B from an admin cmd, but it says:

Corrupt master file table. Windows will attempt to recover master file table from disk.
Windows cannot recover master file table. CHKDSK aborted.

On drive A, I used sudo fsck.ext4 -vy /dev/xxx to recover the ext4 file system. I got back a single file, which was the only one I hadn't moved.

How should I attempt to recover the files in the drive?

The first software I tried on drive B was Recuva. I got a warning saying:

Failed to scan the following drives:
A:: Invalid data run detected
