[–] [email protected] 58 points 2 weeks ago* (last edited 2 weeks ago) (9 children)

tl;dr: It’s not DeepSeek the model, it’s their app and its privacy policy.

[–] [email protected] 17 points 2 weeks ago* (last edited 2 weeks ago) (8 children)

DeepSeek, the self-hosted model, is pretty decent even distilled down to 8B, but I always ensure I get an abliterated version to strip out the Chinese censorship (and also the built-in OpenAI censorship, given how the model was actually developed).
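
In case it helps, here is a rough sketch (not necessarily how the commenter above does it) of pulling and querying such a model against a local Ollama server over its plain REST API from Python. The model tag is only an example, since abliterated community builds go by various names; substitute whichever one you actually pull.

# Sketch: pull an (example) abliterated DeepSeek-R1 8B distill via a local
# Ollama server, then ask it a one-shot question. The model tag is an
# assumption; swap in the abliterated build you actually use.
import requests

OLLAMA = "http://localhost:11434"
MODEL = "huihui_ai/deepseek-r1-abliterated:8b"  # example tag

# Download the model if it isn't already present (stream=False returns one JSON object).
requests.post(f"{OLLAMA}/api/pull",
              json={"model": MODEL, "stream": False}).raise_for_status()

# Single prompt against the generate endpoint.
resp = requests.post(f"{OLLAMA}/api/generate",
                     json={"model": MODEL,
                           "prompt": "What happened at Tiananmen Square in 1989?",
                           "stream": False})
resp.raise_for_status()
print(resp.json()["response"])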

[–] UndulyUnruly 4 points 2 weeks ago (1 child)

ensure I get an abliterated version

Could you expand on how to do this, please and thank you.

[–] [email protected] 16 points 2 weeks ago* (last edited 2 weeks ago) (2 children)
[–] UndulyUnruly 2 points 2 weeks ago
[–] hummingbird 2 points 2 weeks ago (1 child)

Learned something new. Thank you!

[–] [email protected] 3 points 2 weeks ago* (last edited 2 weeks ago) (1 child)

You can also run your own fancy front end and host your own GPT-style website, all locally.

[–] [email protected] 1 point 2 weeks ago

I'm doing that with Docker Compose in my homelab; it's pretty neat!

services:
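  # ollama serves the model API (GPU-backed); open-webui is the browser front end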
  ollama:
    volumes:
      - /etc/ollama-docker/ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    image: ollama/ollama
    ports:
      - 11434:11434
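    # reserve NVIDIA GPU 0 for this container (requires the NVIDIA Container Toolkit)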
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']
              capabilities:
                - gpu

  open-webui:
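    # the build block assumes the Open WebUI repo (with its Dockerfile) is checked out
    # in this directory; otherwise the published ghcr.io image below is all you need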
    build:
      context: .
      args:
        OLLAMA_BASE_URL: '/ollama'
      dockerfile: Dockerfile
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    volumes:
      - /etc/ollama-docker/open-webui:/app/backend/data
    depends_on:
      - ollama
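    # UI on host port 3000; it reaches Ollama by service name over the compose network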
    ports:
      - 3000:8080
    environment:
      - 'OLLAMA_BASE_URL=http://ollama:11434/'
      - 'WEBUI_SECRET_KEY='
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped

volumes:
  ollama: {}
  open-webui: {}
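
A couple of notes on the compose file: the top-level named volumes (ollama, open-webui) are declared but never referenced, since both services bind-mount directories under /etc/ollama-docker instead, and Ollama's API is published on host port 11434 alongside the UI on port 3000. After docker compose up -d, a quick sanity check from the host can look something like this (a sketch against the stock Ollama endpoints):

import requests

# List the models the ollama container currently has pulled.
tags = requests.get("http://localhost:11434/api/tags").json()
print([m["name"] for m in tags["models"]])

Open WebUI itself is then reachable at http://localhost:3000.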