Kubernetes

33

An interesting project to have a look at.

34

Gorilla-CLI converts natural-language prompts into shell commands. No OpenAI keys needed!

https://github.com/gorilla-llm/gorilla-cli

Today, I wanted to patch my nodelocaldns daemonset so it wouldn't run on Fargate nodes. Of course, I didn't remember the schema for patching with specific instructions, so I asked Gorilla:

$ gorilla show me how to patch a daemonset using kubectl to add nodeaffinity that matches expression eks.amazonaws.com/compute-type notin Fargate

Gorilla responded with:

kubectl -n kube-system patch daemonset node-local-dns --patch '{"spec": {"template": {"spec": {"affinity": {"nodeAffinity": {"requiredDuringSchedulingIgnoredDuringExecution": {"nodeSelectorTerms": [{"matchExpressions": [{"key": "eks.amazonaws.com/compute-type","operator": "NotIn","values": ["fargate"]}]}]}}}}}}'

Close enough! It just missed a trailing '}'.
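
For anyone copy-pasting, here is the same command with the final brace restored:

kubectl -n kube-system patch daemonset node-local-dns --patch '{"spec": {"template": {"spec": {"affinity": {"nodeAffinity": {"requiredDuringSchedulingIgnoredDuringExecution": {"nodeSelectorTerms": [{"matchExpressions": [{"key": "eks.amazonaws.com/compute-type","operator": "NotIn","values": ["fargate"]}]}]}}}}}}}'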

Really impressed.

35

Look, I get it. Docker started the whole movement. But if you're an OSS software vendor, do your users a solid: don't use Docker Hub for image hosting. Between ghcr.io (GitHub), Quay, and others, there are plenty of free choices that don't impose rate limits on users. Unless you want to push people toward Docker subscriptions, FOSS projects should publish images to registries that don't rate limit.

36

I'd love to hear some stories about how you or your organization is using Kubernetes for development! My team is experimenting with using it because our "platform" is getting into the territory of too large to run or manage on a single developer machine. We've previously used Docker Compose to enable starting things up locally, but that started getting complicated.

The approach we're trying now is to have a Helm chart to deploy the entire platform to a k8s namespace unique to each developer and then using Telepresence to connect a developer's laptop to the cluster and allow them to run specific services they're working on locally.
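
A rough sketch of the workflow (the namespace and service names here are made up, and exact Telepresence flags vary by version):

# deploy the whole platform into a per-developer namespace
helm install platform ./platform-chart --namespace dev-alice --create-namespace

# connect the laptop to the cluster and intercept the service under development
telepresence connect
telepresence intercept my-service --namespace dev-alice --port 8080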

This seems to be working well, but now I'm finding myself concerned about resource utilization in the cluster: devs don't remember to uninstall or scale down their workloads when they're no longer active, which inflates the cluster size.

Would love to hear some stories from others!

37

Although it infantilizes k8s quite a bit, this video REALLY helped me when I started my cloud-native journey.

38

cross-posted from: https://lemmy.run/post/10475

Testing Service Accounts in Kubernetes

Service accounts in Kubernetes are used to provide a secure way for applications and services to authenticate and interact with the Kubernetes API. Testing service accounts ensures their functionality and security. In this guide, we will explore different methods to test service accounts in Kubernetes.

1. Verifying Service Account Existence

To start testing service accounts, you first need to ensure they exist in your Kubernetes cluster. You can use the following command to list all the available service accounts:

kubectl get serviceaccounts

Verify that the service account you want to test is present in the output. If it's missing, you may need to create it using a YAML manifest or the kubectl create serviceaccount command.
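
For example, to list accounts in a specific namespace and create one if it's missing (names here are illustrative):

kubectl get serviceaccounts -n my-namespace
kubectl create serviceaccount my-app-sa -n my-namespace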

2. Checking Service Account Permissions

After confirming the existence of the service account, the next step is to verify its permissions. Service accounts in Kubernetes are associated with roles or cluster roles, which define what resources and actions they can access.

To check the permissions of a service account, you can use the kubectl auth can-i command. For example, to check if a service account can create pods, run:

kubectl auth can-i create pods --as=system:serviceaccount:<namespace>:<service-account>

Replace <namespace> with the desired namespace and <service-account> with the name of the service account.
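
For instance, assuming an account named my-app-sa in the default namespace:

kubectl auth can-i create pods --as=system:serviceaccount:default:my-app-sa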

3. Testing Service Account Authentication

Service accounts authenticate with the Kubernetes API using bearer tokens. To test service account authentication, you can manually retrieve the token associated with the service account and use it to authenticate requests.

To get the token for a service account, run:

kubectl get secret <service-account-token-secret> -o jsonpath="{.data.token}" | base64 --decode

Replace <service-account-token-secret> with the actual name of the secret associated with the service account. This command decodes and outputs the service account token.
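
Note that on Kubernetes 1.24 and later, token Secrets are no longer created automatically for service accounts; there you can mint a short-lived token directly instead:

kubectl create token <service-account>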

You can then use the obtained token to authenticate requests to the Kubernetes API, for example, by including it in the Authorization header using tools like curl or writing a simple program.
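
For example, a quick smoke test with curl (the API server address is a placeholder, and -k skips TLS verification, so treat this as test-only):

TOKEN=$(kubectl create token my-app-sa)
curl -k -H "Authorization: Bearer $TOKEN" https://<api-server>:6443/api/v1/namespaces/default/pods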

4. Testing Service Account RBAC Policies

Role-Based Access Control (RBAC) policies govern the access permissions for service accounts. It's crucial to test these policies to ensure service accounts have the appropriate level of access.

One way to test RBAC policies is by creating a Pod that uses the service account you want to test and attempting to perform actions that the service account should or shouldn't be allowed to do. Observe the behavior and verify if the access is granted or denied as expected.
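
A minimal sketch of such a test Pod (the service account name and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: sa-rbac-test
spec:
  serviceAccountName: my-app-sa   # service account under test
  containers:
    - name: kubectl
      image: bitnami/kubectl:latest
      command: ["sleep", "3600"]

Once it's running, kubectl inside the Pod picks up the mounted token automatically, so you can probe permissions from the service account's point of view:

kubectl exec sa-rbac-test -- kubectl auth can-i create pods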

5. Automated Testing

To streamline the testing process, you can create automated tests using testing frameworks and tools specific to Kubernetes. For example, the Kubernetes Test Framework (KTF) provides a set of libraries and utilities for writing tests for Kubernetes components, including service accounts.

Using such frameworks allows you to write comprehensive test cases to validate service account behavior, permissions, and RBAC policies automatically.

Conclusion

Testing service accounts in Kubernetes ensures their proper functioning and adherence to security policies. By verifying service account existence, checking permissions, testing authentication, and validating RBAC policies, you can confidently use and rely on service accounts in your Kubernetes deployments.

Remember, service accounts are a critical security component, so it's important to regularly test and review their configuration to prevent unauthorized access and potential security breaches.

39

cross-posted from: https://lemmy.run/post/10206

Creating a Helm Chart for Kubernetes

In this tutorial, we will learn how to create a Helm chart for deploying applications on Kubernetes. Helm is a package manager for Kubernetes that simplifies the deployment and management of applications. By using Helm charts, you can define and version your application deployments as reusable templates.

Prerequisites

Before we begin, make sure you have the following prerequisites installed:

  • Helm: Follow the official Helm documentation for installation instructions.

Step 1: Initialize a Helm Chart

To start creating a Helm chart, open a terminal and navigate to the directory where you want to create your chart. Then, run the following command:

helm create my-chart

This will create a new directory named my-chart with the basic structure of a Helm chart.
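
The generated layout looks roughly like this (the exact set of starter templates varies by Helm version):

my-chart/
  Chart.yaml        # chart metadata
  values.yaml       # default configuration values
  charts/           # dependency charts
  templates/        # Kubernetes manifest templates
  .helmignore       # patterns excluded when packaging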

Step 2: Customize the Chart

Inside the my-chart directory, you will find several files and directories. The most important ones are:

  • Chart.yaml: This file contains metadata about the chart, such as its name, version, and dependencies.
  • values.yaml: This file defines the default values for the configuration options used in the chart.
  • templates/: This directory contains the template files for deploying Kubernetes resources.

You can customize the chart by modifying these files and adding new ones as needed. For example, you can update the Chart.yaml file with your desired metadata and edit the values.yaml file to set default configuration values.
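
For example, a values.yaml that matches the Deployment template in the next step might look like this (the image is a placeholder):

replicaCount: 2
image:
  repository: nginx
  tag: "1.25"
containerPort: 80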

Step 3: Define Kubernetes Resources

To deploy your application on Kubernetes, you need to define the necessary Kubernetes resources in the templates/ directory. Helm uses the Go template language to generate Kubernetes manifests from these templates.

For example, you can create a deployment.yaml template to define a Kubernetes Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-deployment
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Release.Name }}
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
            - containerPort: {{ .Values.containerPort }}

This template uses the values defined in values.yaml to customize the Deployment's name, replica count, image, and container port. Note that apps/v1 Deployments require a selector whose matchLabels match the Pod template's labels, which is why the selector block above repeats the app label.
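
Before packaging, you can sanity-check the chart and preview the rendered manifests:

helm lint my-chart
helm template my-chart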

Step 4: Package and Install the Chart

Once you have defined your Helm chart and customized the templates, you can package and install it on a Kubernetes cluster. To package the chart, run the following command:

helm package my-chart

This will create a .tgz file containing the packaged chart.

To install the chart on a Kubernetes cluster, use the following command:

helm install my-release my-chart-0.1.0.tgz

Replace my-release with the desired release name and my-chart-0.1.0.tgz with the name of your packaged chart.
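
Once installed, a few follow-up commands help manage the release:

helm list
helm status my-release
helm upgrade my-release my-chart-0.1.0.tgz --set replicaCount=3
helm uninstall my-release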

Conclusion

Congratulations! You have learned how to create a Helm chart for deploying applications on Kubernetes. By leveraging Helm's package management capabilities, you can simplify the deployment and management of your Kubernetes-based applications.

Feel free to explore the Helm documentation for more advanced features and best practices.

Happy charting!

40

In March 2023, Argo CD completed a refactor of its release process to provide SLSA Level 3 provenance for container images and CLI binaries. The CNCF also commissioned a security audit of Argo CD, conducted by Chainguard. The audit found that Argo CD achieved SLSA Level 3 (v0.1) across the source, build, and provenance sections.

The Argo Project will next roll out attestations to Argo Rollouts, then follow with the remaining projects. SLSA recently announced its Version 1.0 specification, which Argo plans to adopt.
