Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>How do I resolve the error "no module named pandas" when one node (in Airflow's DAG) can successfully use it (pandas) and another cannot?</p>
<p>I am unable to deduce why I am getting the error "no module named pandas".</p>
<p>I have checked via <code>pip3 freeze</code> and yes, the desired pandas version does show up.</p>
<p>I have deployed this using docker on a kubernetes cluster.</p>
| aviral sanjay | <p><a href="https://github.com/apache/incubator-airflow/blob/v1-10-stable/setup.py#L292" rel="nofollow noreferrer">Pandas is generally required</a>, and is sometimes used in some hooks to return dataframes. It's possible that Airflow was installed with <code>pip</code> rather than <code>pip3</code>, adding it as a Python 2 module instead of a Python 3 module (though using <code>pip</code> should still have installed Pandas, judging by the <a href="https://github.com/apache/incubator-airflow/blob/v1-10-stable/setup.py#L292" rel="nofollow noreferrer"><code>setup.py</code></a>).</p>
<p>Which Operator in your DAG is giving this error?
Do you have any PythonVirtualEnvironmentOperators or BashOperators running <code>python</code> from the command line (and thus possibly not sharing the same environment that you're checking has <code>pandas</code>)?</p>
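<p>A quick way to check is to run the import with both interpreters inside the pod that actually executes the task. A minimal sketch, where the pod name is a placeholder you'd substitute:</p>
<pre><code># placeholder pod name -- use the worker/scheduler pod that runs the failing task
kubectl exec -it airflow-worker-pod -- python3 -c "import pandas; print(pandas.__version__)"
kubectl exec -it airflow-worker-pod -- python -c "import pandas; print(pandas.__version__)"
</code></pre>
<p>If the second command fails while the first succeeds, the task is being run with a Python 2 interpreter that lacks the module.</p>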
| dlamblin |
<p>I tried to install ibm-eventstreams-dev v0.1.2 on my Mac.</p>
<p>After I installed Event Streams, there are always several pods that can't run: three Kafka pods (es-ibm-es-kafka-sts-0/1/2), es-ibm-es-ui-deploy-69758d9dfd-kc2zx, and es-ibm-es-ui-oauth2-client-reg-pgvq6. There is also a failed job named es-ibm-es-ui-oauth2-client-reg.</p>
<p>You can see the details in the following images:
<a href="https://i.stack.imgur.com/Qg3MB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Qg3MB.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/n3YpQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n3YpQ.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/h4ZBu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h4ZBu.png" alt="enter image description here"></a></p>
<p>So I have a few questions about ibm-event-streams:</p>
<ul>
<li><p>Is ibm-eventstreams-dev only supported on ICP? Can I install it in my local environment without ICP?</p></li>
<li><p>How can I solve the UI pods problem in ibm-eventstreams-dev?</p></li>
<li><p>What's wrong with the Kafka pods? What does the status message "CrashLoopBackOff" mean?</p></li>
</ul>
<p>My environment details:</p>
<ul>
<li>kubernetes 1.11.1</li>
<li>helm : stable 2.10.0</li>
<li>a cluster with three nodes; each node is a virtual machine</li>
</ul>
<p>Please help me, Thanks a lot!</p>
| DoubleQueens | <blockquote>
<p>So I have a few questions about ibm-event-streams:<br>
Is ibm-eventstreams-dev only supported on ICP? Can I install it in my local environment without ICP?</p>
</blockquote>
<p>Event Streams will only run on IBM Cloud Private (ICP). That's because ICP provides more than just a Kubernetes environment. For example, authentication and user management for Event Streams is provided by the ICP platform. </p>
<p>That's what the es-ibm-es-ui-oauth2-client-reg job that is failing for you is trying to do - set up the OAuth integration with ICP. And that'll be why it failed for you in Kubernetes on your Mac - because some of the dependencies that Event Streams has will be missing. </p>
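<p>If you want to confirm that, the logs of the failing job and pods should show which dependency is missing. A sketch, using the job and pod names from the question:</p>
<pre><code>kubectl logs job/es-ibm-es-ui-oauth2-client-reg
kubectl describe pod es-ibm-es-kafka-sts-0
</code></pre>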
<blockquote>
<p>How can I solve the UI pods problem in ibm-eventstreams-dev?</p>
</blockquote>
<p>I'm afraid you won't be able to fix this in just K8S on your Mac - all of the problems that you describe are a result of bits of ICP that Event Streams depends on being missing.</p>
<p>You can get a Community Edition of ICP (at no charge) from <a href="https://www.ibm.com/account/reg/us-en/signup?formid=urx-20295" rel="nofollow noreferrer">https://www.ibm.com/account/reg/us-en/signup?formid=urx-20295</a> - which would let you give it a try. </p>
| dalelane |
<p>I hope it's ok to ask for your advice.</p>
<p>The problem in a nutshell: my pipeline cannot pull private images from GHCR.IO into Okteto Kubernetes, but public images from the same private repo work.</p>
<p>I'm on Windows 10 and use WSL2-Ubuntu 20.04 LTS with kinD for development and tried minikube too.</p>
<p>I get an error in Okteto which says that the image pull is “unauthorized” -> “imagePullBackOff”.</p>
<p>Things I did: browsed Stack Overflow, RTFM, read the Okteto FAQ, downloaded the Okteto kubeconfig, pulled my hair out, and spent more hours than I would like to admit – still no success yet.</p>
<p>For whatever reason I cannot create a "kubectl secret" that works. When logged in to ghcr.io via "docker login --username" I can pull private images locally.</p>
<p>No matter what I’ve tried I still get the error “unauthorized” when trying to pull a private image in Okteto.</p>
<p>My Setup with latest updates:</p>
<ul>
<li>Windows 10 Pro</li>
<li>JetBrains Rider IDE</li>
<li>WSL2-Ubuntu 20.04 LTS</li>
<li>ASP.NET Core MVC app</li>
<li>.NET 6 SDK</li>
<li>Docker</li>
<li>kinD</li>
<li>minikube</li>
<li>Chocolatey</li>
<li>Homebrew</li>
</ul>
<p>Setup kinD</p>
<pre><code>kind create cluster --name my-name
kubectl create namespace my-namespace
# create a secret to pull images from ghcr.io
kubectl create secret docker-registry my-secret -n my-namespace --docker-username="my-username" --docker-password="my-password" --docker-email="my-email" --docker-server="https://ghcr.io"
# patch local service account
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "my-secret"}]}'
</code></pre>
<p>kubernetes.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: okteto-repo
namespace: my-namespace
spec:
replicas: 1
selector:
matchLabels:
app: okteto-repo
template:
metadata:
labels:
app: okteto-repo
spec:
containers:
- name: okteto-repo
image: ghcr.io/user/okteto-repo:latest
ports:
- containerPort: 80
imagePullSecrets:
- name: my-secret
---
apiVersion: v1
kind: Service
metadata:
name: okteto-repo
annotations:
dev.okteto.com/auto-ingress: "true"
spec:
type: ClusterIP
selector:
app: okteto-repo
ports:
- protocol: TCP
port: 8080
targetPort: 80
</code></pre>
<p>Do you have an idea why it doesn't work and what I could do?</p>
<p>Thanks a lot my dear friends, every input is highly appreciated!</p>
<p>Hope you guys have great holidays.</p>
<p>Cheers,
Michael</p>
| Michael | <p>I was able to pull a private image by doing the following:</p>
<ol>
<li>Created a personal token in GitHub with <code>repo</code> access.</li>
<li>Built and pushed the image to GitHub's Container Registry (I used <code>okteto build -t ghcr.io/rberrelleza/go-getting-started:0.0.1</code>)</li>
<li>Downloaded my <a href="https://okteto.com/docs/reference/cli/#update-kubeconfig" rel="noreferrer">kubeconfig credentials</a> from Okteto Cloud by running <code>okteto context update-kubeconfig</code>.</li>
<li>Created a secret with my credentials: <code>kubectl create secret docker-registry gh-regcred --docker-server=ghcr.io --docker-username=rberrelleza --docker-password=ghp_XXXXXX</code></li>
<li>Patched the default account to include the secret as an image pull secret: <code>kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "gh-regcred"}]}'</code></li>
<li>Updated the image name in the kubernetes manifest</li>
<li>Created the deployment (<code>kubectl apply -f k8s.yaml</code>)</li>
</ol>
<p>This is what my Kubernetes resources look like, in case it helps:</p>
<pre><code># k8s.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-world
spec:
replicas: 1
selector:
matchLabels:
app: hello-world
template:
metadata:
labels:
app: hello-world
spec:
containers:
- image: ghcr.io/rberrelleza/go-getting-started:0.0.1
name: hello-world
---
apiVersion: v1
kind: Service
metadata:
name: hello-world
annotations:
dev.okteto.com/auto-ingress: "true"
spec:
type: ClusterIP
ports:
- name: "hello-world"
port: 8080
selector:
app: hello-world
</code></pre>
<pre><code># default SA
apiVersion: v1
imagePullSecrets:
- name: gh-regcred
- name: okteto-regcred
kind: ServiceAccount
metadata:
creationTimestamp: "2021-05-21T22:26:38Z"
name: default
namespace: rberrelleza
resourceVersion: "405042662"
uid: 2b6a6eef-2ce7-40d3-841a-c0a5497279f7
secrets:
- name: default-token-7tm42
</code></pre>
| Ramiro Berrelleza |
<p>I've been using kubectl to upload Airflow workflows to Kubernetes (/usr/local/airflow/dags) manually. Is it possible to do this without using kubectl, e.g. with a Python script or something else? If so, would you be able to share your source or your Python script? Thanks, I appreciate it.</p>
| Mihail | <p>This totally depends on your setup. E.g., we use AWS, so we have the DAGs syncing from an S3 bucket path every 5 minutes; we just put DAGs into S3. I see that some Kubernetes setups use a kind of shared volume defined by a git repository, which might also work. Airflow itself (neither the webserver(s), the worker(s), nor the scheduler) offers any hook to upload into the DAG directory.</p>
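<p>As a rough illustration of the S3 approach, here is a minimal sketch of a sync loop (for example in a sidecar container), assuming the <code>aws</code> CLI and credentials are available in the pod; the bucket name is hypothetical:</p>
<pre><code># hypothetical bucket; sync into the DAG folder every 5 minutes
while true; do
  aws s3 sync s3://my-dag-bucket/dags /usr/local/airflow/dags --delete
  sleep 300
done
</code></pre>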
| dlamblin |
<p>I'm trying to deploy okteto environment on Visual Studio Code to use Remote Development on Kubernetes.</p>
<p>Following the official steps (<a href="https://okteto.com/blog/remote-kubernetes-development/" rel="nofollow noreferrer">https://okteto.com/blog/remote-kubernetes-development/</a>), I executed "Okteto: up" and selected manifest(vscode-remote-go/okteto.yml), but got this error:</p>
<pre><code>Installing dependencies...
x couldn't download syncthing, please try again
</code></pre>
<p>By changing the log level, I also got these logs:</p>
<pre><code>C:\Users\user\AppData\Local\Programs\okteto.exe up -f 'c:\Workspace\...my_project...\vscode-remote-go\okteto.yml' --remote '22100' --loglevel=debug
time="2021-09-13T14:09:32+09:00" level=info msg="starting up command"
time="2021-09-13T14:09:32+09:00" level=info msg="failed to get latest version from github: fail to get releases from github: Get \"https://api.github.com/repos/okteto/okteto/releases?per_page=5\": dial tcp: lookup api.github.com: no such host"
Installing dependencies...
time="2021-09-13T14:09:32+09:00" level=info msg="installing syncthing for windows/amd64"
time="2021-09-13T14:09:32+09:00" level=info msg="failed to download syncthing, retrying: failed to download syncthing from https://github.com/syncthing/syncthing/releases/download/v1.10.0/syncthing-windows-amd64-v1.10.0.zip: Get \"https://github.com/syncthing/syncthing/releases/download/v1.10.0/syncthing-windows-amd64-v1.10.0.zip\": dial tcp: lookup github.com: no such host"
time="2021-09-13T14:09:33+09:00" level=info msg="installing syncthing for windows/amd64"
time="2021-09-13T14:09:33+09:00" level=info msg="failed to download syncthing, retrying: failed to download syncthing from https://github.com/syncthing/syncthing/releases/download/v1.10.0/syncthing-windows-amd64-v1.10.0.zip: Get \"https://github.com/syncthing/syncthing/releases/download/v1.10.0/syncthing-windows-amd64-v1.10.0.zip\": dial tcp: lookup github.com: no such host"
time="2021-09-13T14:09:34+09:00" level=info msg="installing syncthing for windows/amd64"
time="2021-09-13T14:09:34+09:00" level=info msg="failed to upgrade syncthing: failed to download syncthing from https://github.com/syncthing/syncthing/releases/download/v1.10.0/syncthing-windows-amd64-v1.10.0.zip: Get \"https://github.com/syncthing/syncthing/releases/download/v1.10.0/syncthing-windows-amd64-v1.10.0.zip\": dial tcp: lookup github.com: no such host"
time="2021-09-13T14:09:34+09:00" level=info msg="couldn't download syncthing, please try again"
x couldn't download syncthing, please try again
</code></pre>
<p>This environment is behind my corporate proxy, and okteto.exe may not use Windows proxy setting. When I directly enter the URL (<a href="https://github.com/syncthing/syncthing/releases/download/v1.10.0/syncthing-windows-amd64-v1.10.0.zip" rel="nofollow noreferrer">https://github.com/syncthing/syncthing/releases/download/v1.10.0/syncthing-windows-amd64-v1.10.0.zip</a>) it can be downloaded using proxy.</p>
<p>Is it possible to use okteto behind proxy?</p>
| Daigo | <p>Using a proxy is not currently supported in Okteto. We're looking into it, though.</p>
<p>For now, a workaround is to manually download the syncthing binary and save it as <code>%HOME%\.okteto\syncthing.exe</code>.</p>
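<p>For example, the download from the failing URL in the logs can be done by hand and the binary placed where Okteto expects it. A sketch, assuming a corporate proxy reachable at <code>http://proxy.example.com:8080</code> (run from a shell such as Git Bash or WSL):</p>
<pre><code># proxy address is an assumption -- use your corporate proxy
curl -x http://proxy.example.com:8080 -L -o syncthing.zip \
  https://github.com/syncthing/syncthing/releases/download/v1.10.0/syncthing-windows-amd64-v1.10.0.zip
unzip syncthing.zip
mkdir -p "$HOME/.okteto"
cp syncthing-windows-amd64-v1.10.0/syncthing.exe "$HOME/.okteto/syncthing.exe"
</code></pre>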
| Ramiro Berrelleza |
<p>Is there currently a way to serve websockets from an application deployed on Okteto cloud given the okteto-specific limitations around <code>Ingress</code>es and <code>Service</code>s ?</p>
<hr>
<p>I've read that this would only be possible using a <code>Service</code> or <code>Ingress</code> of type <code>LoadBalancer</code>, so that is what I've tried.</p>
<p>But, according to the <a href="https://okteto.com/docs/cloud/build" rel="nofollow noreferrer">Okteto docs</a>, <code>Service</code>s of type <code>LoadBalancer</code> (or <code>NodePort</code>) are managed. In practice they seem to get transformed automatically into a <code>ClusterIP</code> <code>Service</code>, and exposed to the internet on an automatic URL.</p>
<p>Do these handle only <code>HTTP</code> requests ? Or is there a way to make them handle other kinds of connections based on TCP or UDP (like websockets) ?</p>
| Nicolas Marshall | <p>You don't need a LoadBalancer to use WebSockets, they can be served from an Ingress with a ClusterIP as well (this is what Okteto Cloud uses for our endpoints). This setup supports HTTPS, WebSockets and even GRPC-based endpoints.</p>
<p><a href="https://github.com/okteto/node-websocket" rel="nofollow noreferrer">This sample</a> shows you how to use WebSockets on a Node app deployed in Okteto Cloud, hope it helps! (it uses okteto-generated Kubernetes manifests, but you can also bring your own).</p>
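<p>In outline, all you need is a ClusterIP Service with the auto-ingress annotation in front of your pods. A minimal sketch with a hypothetical app name (not taken from the linked sample):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ws-app                              # hypothetical name
  annotations:
    dev.okteto.com/auto-ingress: "true"     # okteto exposes it on a public URL
spec:
  type: ClusterIP
  selector:
    app: ws-app                             # assumes your pods carry this label
  ports:
  - protocol: TCP
    port: 8080
</code></pre>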
| Ramiro Berrelleza |
<p>I was working on Jenkins for many days and deploy my services to Kubernetes.</p>
<p>I recently came across Jenkins X, and I also found a Helm chart for Jenkins through which I can host Jenkins in Kubernetes. Now I'm confused: are they both the same?</p>
| RohithVallabhaneni | <p>No, they are different. I assume the Helm chart you found installs and configures Jenkins on Kubernetes - perhaps configured with some agents to run builds. </p>
<p>Jenkins X is a Kubernetes-native implementation of CI/CD. It uses some components of Jenkins, but has a lot more to it (for example, applications, environments, review apps, deployments and so on) for running apps ON Kubernetes in a CI/CD fashion. The Jenkins Helm chart likely sets up a single server. </p>
<p>Edit: in the time since, Jenkins X has evolved a lot. It is now built using the Tekton engine for pipelines by default, and has many moving parts, so it is quite different from running a more classic Jenkins master/agent setup in a Kubernetes cluster. </p>
| Michael Neale |
<p>Recently, I tried to set up Jenkins X on a Kubernetes cluster. However, I ran into a problem during installation. </p>
<p>There are several options in <code>jx create cluster</code> such as aks(create with AKS), aws(create with AWS), minikube(create with Minikube) and etc.</p>
<p>However, there is no option to use an existing local Kubernetes cluster. I want to set up Jenkins X with my own cluster. </p>
<p>Can I get some advice?</p>
<p>Thanks.</p>
| jwl1993 | <p>When you have your cluster set up such that you can run <code>kubectl</code> commands against it, you can run <code>jx boot</code> to set up your jx installation. You don't need to use <code>jx create cluster</code> as your cluster already exists. </p>
| Michael Neale |
<p>I'm creating a custom resource definition (CRD) with an associated controller using <a href="https://github.com/kubernetes-sigs/kubebuilder" rel="nofollow noreferrer">kubebuilder</a>. My controller reconcile loop creates a deployment sub-resource and parents it to the custom resource using <code>controllerutil.SetControllerReference(&myResource, deployment, r.Scheme)</code>. I've also configured my reconciler to "own" the sub-resource, as follows:</p>
<pre class="lang-golang prettyprint-override"><code>// SetupWithManager sets up the controller with the Manager.
func (r *MyResourceReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&mygroupv1alpha1.MyResource{}).
Owns(&appsv1.Deployment{}).
Complete(r)
}
</code></pre>
<p>However, when I run my controller locally using <code>make run</code>, I noticed that deleting my CR (the root object) doesn't cause the Deployment sub-resource to get garbage collected. I also noticed that deleting the Deployment sub-resource doesn't trigger my reconciler to run. Why is this? Is there something I'm not doing, or is this possibly a limitation of local development/testing?</p>
| Chris Gillum | <p>Using @coderanger's hint, I could see that the <code>metadata.ownerReferences</code> weren't being set correctly when running the following command:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get deployments sample-deployment -o yaml
</code></pre>
<p>The problem was my controller's reconcile code. I was calling <code>controllerutil.SetControllerReference(&myResource, deployment, r.Scheme)</code> only after I'd already created and persisted the Deployment.</p>
<p><strong>Buggy code</strong></p>
<pre class="lang-golang prettyprint-override"><code>log.Info("Creating a deployment")
deployment := &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: deploymentName,
Namespace: myResource.Namespace,
},
Spec: deploymentSpec,
}
if err = r.Create(ctx, deployment); err != nil {
log.Error(err, "Failed to create deployment")
if errors.IsInvalid(err) {
// Don't retry on validation errors
err = nil
}
return ctrl.Result{}, err
}
// Establish the parent-child relationship between my resource and the deployment
log.Info("Making my resource a parent of the deployment")
if err = controllerutil.SetControllerReference(&myResource, deployment, r.Scheme); err != nil {
log.Error(err, "Failed to set deployment controller reference")
return ctrl.Result{}, err
}
</code></pre>
<p>To fix it, I needed to swap the order of the call to <code>r.Create</code> and <code>controllerutil.SetControllerReference</code>:</p>
<p><strong>Working code</strong></p>
<pre class="lang-golang prettyprint-override"><code>log.Info("Creating a deployment")
deployment := &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: deploymentName,
Namespace: myResource.Namespace,
},
Spec: deploymentSpec,
}
// Establish the parent-child relationship between my resource and the deployment
log.Info("Making my resource a parent of the deployment")
if err = controllerutil.SetControllerReference(&myResource, deployment, r.Scheme); err != nil {
log.Error(err, "Failed to set deployment controller reference")
return ctrl.Result{}, err
}
// Create the deployment with the parent/child relationship configured
if err = r.Create(ctx, deployment); err != nil {
log.Error(err, "Failed to create deployment")
if errors.IsInvalid(err) {
// Don't retry on validation errors
err = nil
}
return ctrl.Result{}, err
}
</code></pre>
<p>I was able to confirm that this worked by looking at the <code>metadata.ownerReferences</code> YAML data for my created deployment (using the command referenced above).</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2021-08-02T16:22:04Z"
generation: 1
name: sample-deployment
namespace: default
ownerReferences:
- apiVersion: resources.mydomain.io/v1alpha1
blockOwnerDeletion: true
controller: true
kind: MyResource
name: myresource-sample
uid: 6ebb146c-afc7-4601-bd75-58efc29beac9
resourceVersion: "569913"
uid: d9a4496f-7418-4831-ab87-4804dcd1f8aa
</code></pre>
| Chris Gillum |
<p>I have deployed an Azure Durable HTTP-triggered function app in Azure Kubernetes Service. I used Visual Studio Code to create the function app. I have followed instructions from this <a href="https://dev.to/anirudhgarg_99/scale-up-and-down-a-http-triggered-function-app-in-kubernetes-using-keda-4m42" rel="nofollow noreferrer">article</a> and the <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-kubernetes-keda#deploying-a-function-app-to-kubernetes" rel="nofollow noreferrer">Microsoft official documentation</a>.</p>
<pre><code>Function runtime: 3.0.2630
Python version: 3.7.7
azure-functions: 1.3.1
azure-functions-durable: 1.0.0b9
</code></pre>
<p>Here is what my HttpStart <code>function.json</code> file looks like:</p>
<pre><code>{
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "anonymous",
"name": "req",
"type": "httpTrigger",
"direction": "in",
"route": "orchestrators/{functionName}",
"methods": [
"post",
"get"
]
},
{
"name": "$return",
"type": "http",
"direction": "out"
},
{
"name": "starter",
"type": "orchestrationClient",
"direction": "in"
}
]
}
</code></pre>
<p>Docker file:</p>
<pre><code>FROM mcr.microsoft.com/azure-functions/python:3.0-python3.7
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true
COPY requirements.txt /
RUN pip install -r /requirements.txt
COPY . /home/site/wwwroot
</code></pre>
<p>This application works perfectly in my local environment. After deploying to the AKS cluster, when I call the URL, it throws the following exception:</p>
<pre><code>fail: Function.HelloWorldHttpStart[3]
Executed 'Functions.HelloWorldHttpStart' (Failed, Id=e7dd35a1-2001-4f11-396f-d251cbd87a0d, Duration=82ms)
Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: Functions.HelloWorldHttpStart
---> System.InvalidOperationException: Webhooks are not configured
at Microsoft.Azure.WebJobs.Extensions.DurableTask.HttpApiHandler.ThrowIfWebhooksNotConfigured() in d:\a\r1\a\azure-functions-durable-extension\src\WebJobs.Extensions.DurableTask\HttpApiHandler.cs:line 737
</code></pre>
<p>Is there any configuration I missed out? Any suggestions would be appreciated.</p>
<p>Thank you.</p>
| Shariful Nibir | <p>Try adding a <code>WEBSITE_HOSTNAME</code> environment variable with <code><ip-address>:<port></code> as the value, where <code><ip-address>:<port></code> refers to the address that can be used to reach your function app from outside.</p>
<p>This error happens when you use an API that depends on this environment variable. When running using the local core tools, this value is set to <code>localhost:7071</code> automatically. When running in the Azure Functions hosted service, this environment variable is also pre-configured to be the DNS name of the function app (e.g. <code>myfunctionapp.azurewebsites.net</code>. For other environments, like AKS, you'll need to add this environment variable explicitly and set it to the correct value for your deployment.</p>
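<p>In a Kubernetes manifest, that could look like the following sketch, where the value is a placeholder for however your function app is reachable from outside:</p>
<pre><code>env:
- name: WEBSITE_HOSTNAME
  value: "20.120.30.40:80"  # placeholder: external IP/port of your Service
</code></pre>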
| Chris Gillum |
<p>Stateless is the way to go for services running in pods. However, I have been trying to move a stateful app which needs session persistence, so that the session survives if one pod goes down, for resiliency reasons.</p>
<p>In the WebSphere world, IHS can be used to keep track of the session, and if a node goes down the session can be recreated on the live clone. </p>
<p>Is there an industry-standard way to handle this issue without having to refactor the application's code, by persisting the session using some sidecar pod?</p>
| Sudheej | <p>Yes. Store the session somewhere. Spring Boot supports MongoDB, Redis, Hazelcast, or any JDBC database out of the box.</p>
<blockquote>
<p>Spring Boot provides Spring Session auto-configuration for a wide range of data stores. When building a Servlet web application, the following stores can be auto-configured:</p>
<ul>
<li>JDBC</li>
<li>Redis</li>
<li>Hazelcast</li>
<li>MongoDB</li>
</ul>
<p>When building a reactive web application, the following stores can be auto-configured:</p>
<ul>
<li>Redis</li>
<li>MongoDB</li>
</ul>
<p>If a single Spring Session module is present on the classpath, Spring Boot uses that store implementation automatically. If you have more than one implementation, you must choose the StoreType that you wish to use to store the sessions. For instance, to use JDBC as the back-end store, you can configure your application as follows:</p>
<p><code>spring.session.store-type=jdbc</code></p>
<p>[Tip] You can disable Spring Session by setting the store-type to none. Each store has specific additional settings. For instance, it is possible to customize the name of the table for the JDBC store, as shown in the following example:</p>
<p><code>spring.session.jdbc.table-name=SESSIONS</code></p>
<p>For setting the timeout of the session you can use the spring.session.timeout property. If that property is not set, the auto-configuration falls back to the value of server.servlet.session.timeout.</p>
</blockquote>
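<p>For example, a minimal sketch of the application configuration for the Redis case, assuming a hypothetical Redis instance exposed through a Kubernetes Service named <code>redis</code> in the <code>default</code> namespace:</p>
<pre><code># hostname is an assumption based on standard Kubernetes service DNS
spring.session.store-type=redis
spring.redis.host=redis.default.svc.cluster.local
spring.redis.port=6379
</code></pre>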
| Strelok |
<p>I'm trying to create a simple nginx service on GKE, but I'm running into strange problems. </p>
<p>Nginx runs on port 80 inside the Pod. The service is accessible on port 8080. (This works, I can do <code>curl myservice:8080</code> inside of the pod and see the nginx home screen)</p>
<p>But when I try to make it publicly accessible using an ingress, I'm running into trouble. Here are my deployment, service and ingress files.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 8080
nodePort: 32111
targetPort: 80
type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
rules:
- http:
paths:
# The * is needed so that all traffic gets redirected to nginx
- path: /*
backend:
serviceName: my-service
servicePort: 80
</code></pre>
<p>After a while, this is what my ingress status looks like:</p>
<pre><code>$ k describe ingress test-ingress
Name: test-ingress
Namespace: default
Address: 35.186.255.184
Default backend: default-http-backend:80 (10.44.1.3:8080)
Rules:
Host Path Backends
---- ---- --------
*
/* my-service:32111 (<none>)
Annotations:
backends: {"k8s-be-30030--ecc76c47732c7f90":"HEALTHY"}
forwarding-rule: k8s-fw-default-test-ingress--ecc76c47732c7f90
target-proxy: k8s-tp-default-test-ingress--ecc76c47732c7f90
url-map: k8s-um-default-test-ingress--ecc76c47732c7f90
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 18m loadbalancer-controller default/test-ingress
Normal CREATE 17m loadbalancer-controller ip: 35.186.255.184
Warning Service 1m (x5 over 17m) loadbalancer-controller Could not find nodeport for backend {ServiceName:my-service ServicePort:{Type:0 IntVal:32111 StrVal:}}: could not find matching nodeport from service
Normal Service 1m (x5 over 17m) loadbalancer-controller no user specified default backend, using system default
</code></pre>
<p>I don't understand why it's saying that it can't find nodeport - the service has nodePort defined and it is of type NodePort as well. Going to the actual IP results in <code>default backend - 404</code>. </p>
<p>Any ideas why?</p>
| Nick | <p>The configuration is missing a health check endpoint, for the GKE loadbalancer to know whether the backend is healthy. The <code>containers</code> section for the <code>nginx</code> should also specify:</p>
<pre><code> livenessProbe:
httpGet:
path: /
port: 80
</code></pre>
<p>The <code>GET /</code> on port 80 is the default configuration, and can be changed.</p>
| Eric Platon |
<p>I would like to expose my managed DigitalOcean Kubernetes (single-node) cluster's service on port 80 without the use of DigitalOcean's load balancer. Is this possible? How would I do this? </p>
<p>This is essentially a hobby project (I am beginning with Kubernetes) and just want to keep the cost very low. </p>
| Joseph Horsch | <p>You can deploy an Ingress configured to use the host network and port 80/443.</p>
<ol>
<li><p>DO's firewall for your cluster doesn't have 80/443 inbound open by default.</p>
<p>If you edit the auto-created firewall the rules <a href="https://www.digitalocean.com/community/questions/how-to-customize-firewall-rules-for-managed-kubernetes-service" rel="noreferrer">will eventually reset themselves</a>. The solution is to create a separate firewall also pointing at the same Kubernetes worker nodes:</p>
</li>
</ol>
<pre><code>$ doctl compute firewall create \
--inbound-rules="protocol:tcp,ports:80,address:0.0.0.0/0,address:::/0 protocol:tcp,ports:443,address:0.0.0.0/0,address:::/0" \
--tag-names=k8s:CLUSTER_UUID \
--name=k8s-extra-mycluster
</code></pre>
<p>(Get the <code>CLUSTER_UUID</code> value from the dashboard or the ID column from <code>doctl kubernetes cluster list</code>)</p>
<ol start="2">
<li>Create the <a href="https://kubernetes.github.io/ingress-nginx/" rel="noreferrer">nginx ingress</a> using the host network. I've included the <a href="https://github.com/helm/charts/tree/master/stable/nginx-ingress" rel="noreferrer">helm chart</a> config below, but you could do it via the direct install process too.</li>
</ol>
<p>EDIT: The Helm chart in the above link has been DEPRECATED. Therefore, the correct way of installing the chart (<a href="https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx" rel="noreferrer">as per the new docs</a>) is:</p>
<pre><code>helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
</code></pre>
<p>After this repo is added & updated</p>
<pre><code># For Helm 2
$ helm install stable/nginx-ingress --name=myingress -f myingress.values.yml
# For Helm 3
$ helm install myingress stable/nginx-ingress -f myingress.values.yml
#EDIT: The New way to install in helm 3
helm install myingress ingress-nginx/ingress-nginx -f myingress.values.yaml
</code></pre>
<p><code>myingress.values.yml</code> for the chart:</p>
<pre class="lang-yaml prettyprint-override"><code>---
controller:
kind: DaemonSet
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
daemonset:
useHostPort: true
service:
type: ClusterIP
rbac:
create: true
</code></pre>
<ol start="3">
<li><p>you should be able to access the cluster on :80 and :443 via any worker node IP and it'll route traffic to your ingress.</p>
</li>
<li><p>since node IPs can & do change, look at deploying <a href="https://github.com/kubernetes-incubator/external-dns" rel="noreferrer">external-dns</a> to manage DNS entries to point to your worker nodes. Again, using the helm chart and assuming your DNS domain is hosted by DigitalOcean (though any supported DNS provider will work):</p>
</li>
</ol>
<pre><code># For Helm 2
$ helm install --name=mydns -f mydns.values.yml stable/external-dns
# For Helm 3
$ helm install mydns stable/external-dns -f mydns.values.yml
</code></pre>
<p><code>mydns.values.yml</code> for the chart:</p>
<pre class="lang-yaml prettyprint-override"><code>---
provider: digitalocean
digitalocean:
# create the API token at https://cloud.digitalocean.com/account/api/tokens
# needs read + write
apiToken: "DIGITALOCEAN_API_TOKEN"
domainFilters:
# domains you want external-dns to be able to edit
- example.com
rbac:
create: true
</code></pre>
<ol start="5">
<li>create a Kubernetes <a href="https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/" rel="noreferrer">Ingress resource</a> to route requests to an existing Kubernetes service:</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: testing123-ingress
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: testing123.example.com # the domain you want associated
http:
paths:
- path: /
backend:
serviceName: testing123-service # existing service
servicePort: 8000 # existing service port
</code></pre>
<ol start="6">
<li>after a minute or so you should see the DNS records appear and be resolvable:</li>
</ol>
<pre><code>$ dig testing123.example.com # should return worker IP address
$ curl -v http://testing123.example.com # should send the request through the Ingress to your backend service
</code></pre>
<p>(Edit: editing the automatically created firewall rules eventually breaks, add a separate firewall instead).</p>
| rcoup |
<p>The data needs to be loaded periodically (say, once a day), and it should be stored in SQL format so that the API can run SQL queries.
We are thinking of loading it from HDFS. Currently we are thinking of using Apache NiFi with PutIgniteCache. </p>
<p>I was thinking probably I can launch a remote Ignite client node and then use IgniteDataStreamer to stream the data, but I was not able to find proper documentation for that. Are there any better ways to do this?</p>
| sidharth padhee | <p>The documentation for NiFi says that <a href="https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-ignite-nar/1.9.0/org.apache.nifi.processors.ignite.cache.PutIgniteCache/index.html" rel="nofollow noreferrer">it uses the data streamer API</a>, so unless you need more control it doesn’t seem like a bad option (with the caveat that I’d never heard of NiFi before, much less used it!).</p>
| Stephen Darlington |
<p>Trying Ignite with Kubernetes deployment options. </p>
<ol>
<li>Is it zone-aware? Cannot find any docs & configuration about this.</li>
<li>Can I connect to the Kubernetes cluster via an external client? I'd like to connect via a <a href="https://apacheignite-net.readme.io" rel="nofollow noreferrer">C# client</a>.</li>
</ol>
<p>Thank you!</p>
| Judith | <ol>
<li>Broadly speaking, Ignite assumes all your nodes are close to one another. You could make it aware of multiple zones by using custom affinity functions, but it's not straight-forward. There are also third-party solutions (I work for GridGain who provide one) that support data centre replication, etc. that would work in this situation</li>
<li>Yes. I would suggest that thick clients should be part of your Kubernetes infrastructure (since they become part of the Ignite cluster topology), but you can connect a thin client just by opening the right ports. You can use Kubernetes load balancing / round robin to connect to any node</li>
</ol>
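<p>For the thin-client case, a sketch of a Service that exposes the default thin-client port; the name and label selector are assumptions:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ignite-thin-client  # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: ignite             # assumes your Ignite pods carry this label
  ports:
  - protocol: TCP
    port: 10800             # default thin client/JDBC port
    targetPort: 10800
</code></pre>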
| Stephen Darlington |
<p>I am trying to extract the pod name using the jq query below.</p>
<pre class="lang-sh prettyprint-override"><code>❯ kubectl get pods -l app=mssql-primary --output jsonpath='{.items[0].metadata.name}'
mssqlag-primary-deployment-77b8974bb9-dbltl%
❯ kubectl get pods -l app=mssql-primary -o json | jq -r '.items[0].metadata.name'
mssqlag-primary-deployment-77b8974bb9-dbltl
</code></pre>
<p>While they both provide the same output, the first one has a % character at the end of the pod name. Any reason why? Is there something wrong with the jsonpath representation in the first command?</p>
| Pradeepl | <p>I'm guessing <code>zsh</code> is your shell. The <code>%</code> is an indicator output by your shell to say that the last line output by <code>kubectl</code> had no newline character at the end. So it's not <em>extra</em> output, it's actually an indicator that the raw <code>kubectl</code> command outputs <em>less</em> than <code>jq</code>.</p>
<p>You could explicitly add a newline to the jsonpath output if you want it:</p>
<pre><code>kubectl get pods -l app=mssql-primary --output jsonpath='{.items[0].metadata.name}{"\n"}'
</code></pre>
<p>Or in the other direction you could tell <code>jq</code> not to add newlines at all by specifying <code>-j</code> instead of <code>-r</code>:</p>
<pre><code>kubectl get pods -l app=mssql-primary -o json | jq -j '.items[0].metadata.name'
</code></pre>
| Weeble |
<p>We are trying to use control.sh to activate our Ignite cluster, which is running in Kubernetes with native persistence as described <a href="https://ignite.apache.org/docs/latest/tools/control-script" rel="nofollow noreferrer">here</a>, however we are getting the error below.</p>
<p>We have also tried activating the cluster via the postStart hook shown below in our deployment, but we get the same error.</p>
<pre><code>lifecycle:
postStart:
exec:
command:
- >-
/opt/ignite/apache-ignite/bin/control.sh --set-state ACTIVE
--yes
</code></pre>
<p>Error:</p>
<pre><code>/opt/ignite/apache-ignite/bin/control.sh --set-state ACTIVE
failed - error: command '/bin/sh -c /opt/ignite/apache-ignite/bin/control.sh --set-state ACTIVE' exited with 2: , message: "JVM_OPTS environment variable is set, but will not be used. To pass JVM options use CONTROL_JVM_OPTS
JVM_OPTS=-DIGNITE_WAL_MMAP=false -DIGNITE_UPDATE_NOTIFIER=false -XX:+UseG1GC -Xmx4g -XX:+DisableExplicitGC -server -Xms4g -XX:+AlwaysPreTouch -XX:+ScavengeBeforeFullGC
Control utility [ver. 2.11.1#20211220-sha1:eae1147d]2021 Copyright(C) Apache Software Foundation
User: root
Time: 2022-05-31T18:56:38.690
Connection to cluster failed. Latest topology update failed.
Command [SET-STATE] finished with code: 2
Control utility has completed execution at: 2022-05-31T18:56:41.859
Execution time: 3169 ms
</code></pre>
| RichardFeynman | <p>Activating the cluster relates to the lifecycle of the <em>cluster</em> rather than an individual pod, so you don't want to add it to the pod.</p>
<p>Instead, it's a "manual" process once all your nodes/pods are up. <a href="https://medium.com/@sdarlington/activating-an-apache-ignite-cluster-on-kubernetes-afbed40c7e53" rel="nofollow noreferrer">I wrote about it here</a>.</p>
<p>In short, either run exec:</p>
<pre><code>kubectl exec -it ignite-0 --namespace=ignite -- /opt/ignite/apache-ignite-fabric/bin/control.sh --activate
</code></pre>
<p>Or create a Kubernetes job.</p>
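<p>A sketch of the Job variant; the image, names and namespace are assumptions, and <code>--host</code> points at the cluster's Service:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: activate-ignite   # hypothetical name
  namespace: ignite
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: activate
        image: apacheignite/ignite:2.11.1   # match your cluster's version
        command: ["/opt/ignite/apache-ignite/bin/control.sh",
                  "--host", "ignite-service", "--set-state", "ACTIVE", "--yes"]
</code></pre>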
| Stephen Darlington |
<p>Are there any known issues with running the org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder in a purely IPv6 environment? I looked <a href="https://ignite.apache.org/docs/latest/clustering/network-configuration" rel="nofollow noreferrer">here</a> and it mentions there may be issues with clusters becoming detached but does not offer any specifics. Any information would be appreciated, thanks.</p>
| RichardFeynman | <p>I'm not aware of any IPv6 problems <em>per se</em>, so if your network is configured correctly I would expect it to work.</p>
<p>The problem we typically see when IPv6 is enabled is that it's possible to route to the IPv4 address but <em>not</em> the IPv6 address -- which is why setting preferIPv4Stack works.</p>
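<p>If you do hit that, forcing the IPv4 stack is a one-line JVM option. In the Docker image it would typically be passed via the <code>JVM_OPTS</code> environment variable; a sketch:</p>
<pre><code>env:
- name: JVM_OPTS
  value: "-Djava.net.preferIPv4Stack=true"
</code></pre>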
| Stephen Darlington |
<p>We have observed the following issue when we deploy an Ignite cluster on OpenShift.</p>
<p>We have created the respective PV and PVC YAML files.</p>
<p>One more important point: it always points to /ignite/work irrespective of the mount path.</p>
<p>Error details at the pod:</p>
<pre><code>SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
class org.apache.ignite.IgniteException: Work directory does not exist and cannot be created: /ignite/work
    at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:1135)
    at org.apache.ignite.Ignition.start(Ignition.java:356)
    at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:365)
Caused by: class org.apache.ignite.IgniteCheckedException: Work directory does not exist and cannot be created: /ignite/work
    at org.apache.ignite.internal.util.IgniteUtils.workDirectory(IgniteUtils.java:9900)
    at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.initializeConfiguration(IgnitionEx.java:1891)
    at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1715)
    at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1160)
    at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1054)
    at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:940)
    at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:839)
    at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:709)
    at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:678)
    at org.apache.ignite.Ignition.start(Ignition.java:353)
    ... 1 more
Failed to start grid: Work directory does not exist and cannot be created: /ignite/work
</code></pre>
<hr />
<p>YAML Content</p>
<hr />
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    field.cattle.io/creatorId: user-zqf4l
  creationTimestamp: "2021-01-12T06:48:02Z"
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    cattle.io/creator: norman
  name: ignite-storage-work-vol
  resourceVersion: "18595579"
  selfLink: /api/v1/persistentvolumes/newsto
  uid: ee81855d-6497-4465-abdd-8244883e383b
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    ## when you create the folder, ensure you give it proper permissions (assign owner):
    ##   chown rootadmin:rootadmin grafana
    ## and give full write access:
    ##   chmod 777 grafana/
    path: /opt/work ## Change the location before deploying
    type: ""
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ignite-storage-work-vol-claim
spec:
  volumeName: ignite-storage-work-vol
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# An example of a Kubernetes configuration for pod deployment.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  # Cluster name.
  name: ignite-cluster
  namespace: or
spec:
  # The initial number of Ignite pods.
  replicas: 2
  serviceName: ignite-service
  selector:
    matchLabels:
      app: ignite
  template:
    metadata:
      labels:
        app: ignite
    spec:
      serviceAccountName: ignite
      # terminationGracePeriodSeconds: 60000 (use in production for graceful restarts and shutdowns)
      containers:
      # Custom pod name.
      - name: ignite-node
        image: apacheignite/ignite:2.13.0
        imagePullPolicy: IfNotPresent
        env:
        - name: OPTION_LIBS
          value: ignite-kubernetes,ignite-rest-http
        - name: CONFIG_URI
          value: file:///ignite/config/ignite-node-cfg.xml
        - name: JVM_OPTS
          value: "-DIGNITE_WAL_MMAP=false"
        # consider this property for production: -DIGNITE_WAIT_FOR_BACKUPS_ON_SHUTDOWN=true
        ports:
        # Ports you might need to open.
        - containerPort: 47100 # communication SPI port
        - containerPort: 47500 # discovery SPI port
        - containerPort: 49112 # JMX port
        - containerPort: 10800 # thin clients/JDBC driver port
        - containerPort: 8080 # REST API
        volumeMounts:
        - mountPath: /ignite/config
          name: config-vol
        - name: work-vol
          mountPath: /tmp/work
          readOnly: false
        - name: storage-vol
          mountPath: /tmp/storage
          readOnly: false
        - name: wal-vol
          mountPath: /tmp/wal
          readOnly: false
        - name: walarchive-vol
          mountPath: /tmp/walarchive
          readOnly: false
      volumes:
      - name: config-vol
        configMap:
          name: ignite-cfg-persistent
      - name: work-vol
        persistentVolumeClaim:
          claimName: ignite-storage-work-vol-claim
      - name: storage-vol
        persistentVolumeClaim:
          claimName: ignite-storage-storage-vol-claim
      - name: wal-vol
        persistentVolumeClaim:
          claimName: ignite-storage-wal-vol-claim
      - name: walarchive-vol
        persistentVolumeClaim:
          claimName: ignite-storage-walarchive-vol-claim
</code></pre>
| Rameish | <p>It's expecting to be able to write to <code>/ignite/work</code> but there's no persistent volume there. You appear to be mounting them in <code>/tmp</code>. Suggest changing:</p>
<pre><code>- name: work-vol
mountPath: /tmp/work
readOnly: false
</code></pre>
<p>To:</p>
<pre><code>- name: work-vol
mountPath: /ignite/work
readOnly: false
</code></pre>
<p>And the same for the other PVs.</p>
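<p>Putting all four mounts together, a sketch; the storage/WAL paths only need to match whatever directories your XML configuration points at:</p>
<pre><code>volumeMounts:
- name: config-vol
  mountPath: /ignite/config
- name: work-vol
  mountPath: /ignite/work
  readOnly: false
- name: storage-vol
  mountPath: /ignite/storage    # assumption: matches your config
  readOnly: false
- name: wal-vol
  mountPath: /ignite/wal        # assumption: matches your config
  readOnly: false
- name: walarchive-vol
  mountPath: /ignite/walarchive # assumption: matches your config
  readOnly: false
</code></pre>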
| Stephen Darlington |
<p>I am trying to run an ASP.Net docker container on Kubernetes as a non-root user. I have this dockerfile:</p>
<pre><code>FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 8443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["MyProgram.API/MyProgram.API.csproj", "MyProgram.API/"]
COPY ["MyProgram.Services/MyProgram.Services.csproj", "MyProgram.Services/"]
COPY ["MyProgram.Core/MyProgram.Core.csproj", "MyProgram.Core/"]
COPY ["MyProgram.Data/MyProgram.Data.csproj", "MyProgram.Data/"]
RUN dotnet restore "MyProgram.API/MyProgram.API.csproj"
COPY . .
WORKDIR "/src/MyProgram.API"
RUN dotnet build "MyProgram.API.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "MyProgram.API.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyProgram.API.dll"]
</code></pre>
<p>When I run it locally, I can go to https://localhost:8443 and use my app successfully. When I deploy it to Kubernetes using this file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
# snip
spec:
securityContext:
fsGroup: 2000
runAsNonRoot: true
runAsUser: 1000
containers:
- name: myprogram
image: mycompany/myprogram:develop
imagePullPolicy: "Always"
env:
- name: "ASPNETCORE_ENVIRONMENT"
value: "Kubernetes"
ports:
- containerPort: 8443
name: "myprogram"
securityContext:
allowPrivilegeEscalation: false
imagePullSecrets:
- name: privatereposecret
---
apiVersion: v1
kind: Service
#snip
spec:
type: NodePort
ports:
- protocol: TCP
port: 8081
targetPort: 8443
nodePort: 31999
selector:
app: myprogram
</code></pre>
<p>My container won't start and gives these logs:</p>
<pre><code>[13:13:30 FTL] Unable to start Kestrel.
System.Net.Sockets.SocketException (13): Permission denied
at System.Net.Sockets.Socket.UpdateStatusAfterSocketErrorAndThrowException(SocketError error, String callerName)
at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.Sockets.Socket.Bind(EndPoint localEP)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.Bind()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketTransportFactory.BindAsync(EndPoint endpoint, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.<>c__DisplayClass21_0`1.<<StartAsync>g__OnBind|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindEndpointAsync(ListenOptions endpoint, AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.ListenOptions.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.AnyIPListenOptions.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.AddressesStrategy.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindAsync(IServerAddressesFeature addresses, KestrelServerOptions serverOptions, ILogger logger, Func`2 createBinding)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.StartAsync[TContext](IHttpApplication`1 application, CancellationToken cancellationToken)
[13:13:30 FTL] Application start-up failed
System.Net.Sockets.SocketException (13): Permission denied
at System.Net.Sockets.Socket.UpdateStatusAfterSocketErrorAndThrowException(SocketError error, String callerName)
at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.Sockets.Socket.Bind(EndPoint localEP)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.Bind()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketTransportFactory.BindAsync(EndPoint endpoint, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.<>c__DisplayClass21_0`1.<<StartAsync>g__OnBind|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindEndpointAsync(ListenOptions endpoint, AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.ListenOptions.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.AnyIPListenOptions.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.AddressesStrategy.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindAsync(IServerAddressesFeature addresses, KestrelServerOptions serverOptions, ILogger logger, Func`2 createBinding)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.StartAsync[TContext](IHttpApplication`1 application, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Hosting.GenericWebHostService.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.Internal.Host.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.Run(IHost host)
at MyProgram.API.Program.Main(String[] args) in /src/MyProgram.API/Program.cs:line 30
</code></pre>
<p>If I try the exact same deployment without a SecurityContext the container works perfectly. What's going wrong?</p>
| yesman | <p>Kestrel is trying to bind to port 80 and/or port 443 because that's its default unless you tell it otherwise, and an unprivileged user can't bind to those ports.</p>
<p>You need to specify the ports, usually via environment variables, and expose them, e.g.</p>
<pre><code># Declare ports above 1024, as an unprivileged non-root user cannot bind to ports <= 1024
ENV ASPNETCORE_URLS http://+:8000;https://+:8443
EXPOSE 8000
EXPOSE 8443
</code></pre>
| blowdart |
<p>I have a Kubernetes setup of 7 Apache Ignite servers and over 100 clients.</p>
<p>With my current Apache Ignite configuration, I am seeing the following line of log for the servers:</p>
<pre><code>java.lang.OutOfMemoryError: Java heap space
</code></pre>
<p>Below is the memory configuration Apache Ignite server:</p>
<ul>
<li>Pod memory limit: 2Gb</li>
<li>Xmx: 768m</li>
</ul>
<p>I would like to know what the optimum memory configuration for the Apache Ignite cluster should be.</p>
| ho wing kent | <p>It depends on what you're trying to do -- persistence and SQL tend to use more heap space for example -- but both 2Gb and 768Mb are <em>much</em> smaller than I'd expect for an in-memory database.</p>
<p>The <a href="https://www.gridgain.com/docs/latest/perf-troubleshooting-guide/memory-tuning" rel="nofollow noreferrer">tuning guide</a> suggests 10Gb as a starting point:</p>
<pre><code>-server
-Xms10g
-Xmx10g
-XX:+AlwaysPreTouch
-XX:+UseG1GC
-XX:+ScavengeBeforeFullGC
-XX:+DisableExplicitGC
</code></pre>
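<p>Whatever heap size you settle on, keep the pod memory limit well above it, since Ignite keeps its data regions off-heap and the JVM itself needs headroom. A sketch with illustrative numbers only:</p>
<pre><code>env:
- name: JVM_OPTS
  value: "-Xms10g -Xmx10g -XX:+AlwaysPreTouch -XX:+UseG1GC"
resources:
  limits:
    memory: 16Gi   # illustrative: heap + off-heap data region + JVM overhead
</code></pre>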
| Stephen Darlington |
<p>I am trying to install Operator Lifecycle Manager (OLM) — a tool to help manage the Operators running on your cluster — from the <a href="https://operatorhub.io/operator/gitlab-runner-operator" rel="nofollow noreferrer">official documentation</a>, but I keep getting the error below. What could possibly be wrong?</p>
<p>This is the result from the command:</p>
<pre class="lang-bash prettyprint-override"><code>curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.22.0/install.sh | bash -s v0.22.0
</code></pre>
<pre class="lang-bash prettyprint-override"><code>/bin/bash: line 2: $'\r': command not found
/bin/bash: line 3: $'\r': command not found
/bin/bash: line 5: $'\r': command not found
: invalid option6: set: -
set: usage: set [-abefhkmnptuvxBCEHPT] [-o option-name] [--] [-] [arg ...]
/bin/bash: line 7: $'\r': command not found
/bin/bash: line 9: $'\r': command not found
/bin/bash: line 60: syntax error: unexpected end of file
</code></pre>
<p>I've tried removing the existing curl and downloading and installing another version, but the issue persists. Most solutions online are for Linux users, and they all lead to Windows path settings and file issues.</p>
<p>I haven't found one tackling installing a file using <code>curl</code>.</p>
<p>I'll gladly accept any help.</p>
| Frederico23 | <p>To start off, I need to be clear about a few things:</p>
<ol>
<li>Based on the tags to the question, I see we are in PowerShell rather than a linux/unix or even Windows cmd shell</li>
<li>In spite of this, we are using Unix <code>curl</code> (probably <code>curl.exe</code>), and not the PowerShell alias for <code>Invoke-WebRequest</code>. We know this because of the <code>-sL</code> argument. If Powershell was using the alias, we'd see a completely different error.</li>
</ol>
<p>Next, I need to talk briefly about line endings. Instead of just a single LF (<code>\n</code>) character as seen in Unix/Linux and expected by bash, Windows by default uses the two-character CR/LF pair (<code>\r\n</code>) for line endings.</p>
<hr />
<p>With all that background out of the way, I can now explain what's causing the problem. It's this single pipe character:</p>
<pre><code>|
</code></pre>
<p>This is a PowerShell pipe, not a Unix pipe, so the operation puts the output of the <code>curl</code> program in the PowerShell pipeline in order to send it to the <code>bash</code> interpreter. Each line is an individual item on the pipeline, and as such no longer includes any original line breaks. The PowerShell pipeline will "correct" this before calling bash using the default line ending for the system, which in this case is the CR/LF pair used by Windows. Now when bash tries to interpret the input, it sees an extra <code>\r</code> character after every line and doesn't know what to do with it.</p>
<p>The trick is, most of what we might do in PowerShell to strip out those extra characters is still going to get sent through another pipe after we're done. I guess we <em>could</em> tell curl to write the file to disk without ever using a pipe, and then tell bash to run the saved file, but that's awkward, extra work, and much slower.</p>
<hr />
<p>But we can do a little better. PowerShell by default treats each line returned by curl as a separate item on the pipeline. We can "trick" it into putting one big item on the pipeline using the <code>-join</code> operation. That will give us one big string that can go on the pipeline as a single element. It will still end up with an extra <code>\r</code> character, but by the time bash sees it the script will have done its work.</p>
<p>Code to make this work is found in the other answer, and they deserve all the credit for the solution. The purpose of my post is to do a little better job explaining what's going on: why we have a problem, and why the solution works, since I had to read through that answer a couple times to really get it.</p>
| Joel Coehoorn |
<p>I deployed an Apache Ignite cluster in Google Cloud following [1], but it gives the class-not-found error below.</p>
<p>[1]. <a href="https://apacheignite.readme.io/docs/google-cloud-deployment" rel="nofollow noreferrer">https://apacheignite.readme.io/docs/google-cloud-deployment</a></p>
<p>Error message :</p>
<pre><code>2018 Copyright(C) Apache Software Foundation
class org.apache.ignite.IgniteException: Failed to instantiate Spring XML application context (make sure all classes used in Spring configuration are present at CLASSPATH) [springUrl=https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube-persistence-and-wal.xml]
at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:990)
at org.apache.ignite.Ignition.start(Ignition.java:355)
at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:301)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to instantiate Spring XML application context (make sure all classes used in Spring configuration are present at CLASSPATH) [springUrl=https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube-persistence-and-wal.xml]
at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.applicationContext(IgniteSpringHelperImpl.java:387)
at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:104)
at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:98)
at org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:744)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:945)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:854)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:724)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:693)
at org.apache.ignite.Ignition.start(Ignition.java:352)
... 1 more
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.apache.ignite.configuration.IgniteConfiguration#0' defined in URL [https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube-persistence-and-wal.xml]: Cannot create inner bean 'org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi#1efee8e7' of type [org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi] while setting bean property 'discoverySpi'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi#1efee8e7' defined in URL [https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube-persistence-and-wal.xml]: Cannot create inner bean 'org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder#1442d7b5' of type [org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder] while setting bean property 'ipFinder'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder#1442d7b5' defined in URL [https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube-persistence-and-wal.xml]: Cannot create inner bean 'org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration#6e2c9341' of type [org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration] while setting constructor argument; nested exception is org.springframework.beans.factory.CannotLoadBeanClassException: Cannot find class [org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration] for bean with name 'org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration#6e2c9341' defined in URL [https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube-persistence-and-wal.xml]; nested exception is java.lang.ClassNotFoundException: org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:313)
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:122)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1533)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1280)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:553)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.applicationContext(IgniteSpringHelperImpl.java:381)
... 9 more
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi#1efee8e7' defined in URL [https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube-persistence-and-wal.xml]: Cannot create inner bean 'org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder#1442d7b5' of type [org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder] while setting bean property 'ipFinder'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder#1442d7b5' defined in URL [https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube-persistence-and-wal.xml]: Cannot create inner bean 'org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration#6e2c9341' of type [org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration] while setting constructor argument; nested exception is org.springframework.beans.factory.CannotLoadBeanClassException: Cannot find class [org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration] for bean with name 'org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration#6e2c9341' defined in URL [https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube-persistence-and-wal.xml]; nested exception is java.lang.ClassNotFoundException: org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:313)
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:122)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1533)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1280)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:553)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:299)
... 22 more
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder#1442d7b5' defined in URL [https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube-persistence-and-wal.xml]: Cannot create inner bean 'org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration#6e2c9341' of type [org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration] while setting constructor argument; nested exception is org.springframework.beans.factory.CannotLoadBeanClassException: Cannot find class [org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration] for bean with name 'org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration#6e2c9341' defined in URL [https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube-persistence-and-wal.xml]; nested exception is java.lang.ClassNotFoundException: org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:313)
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:122)
at org.springframework.beans.factory.support.ConstructorResolver.resolveConstructorArguments(ConstructorResolver.java:648)
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:145)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1197)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1099)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:299)
... 28 more
Caused by: org.springframework.beans.factory.CannotLoadBeanClassException: Cannot find class [org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration] for bean with name 'org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration#6e2c9341' defined in URL [https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube-persistence-and-wal.xml]; nested exception is java.lang.ClassNotFoundException: org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration
at org.springframework.beans.factory.support.AbstractBeanFactory.resolveBeanClass(AbstractBeanFactory.java:1391)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:299)
... 36 more
Caused by: java.lang.ClassNotFoundException: org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.springframework.util.ClassUtils.forName(ClassUtils.java:251)
at org.springframework.beans.factory.support.AbstractBeanDefinition.resolveBeanClass(AbstractBeanDefinition.java:401)
at org.springframework.beans.factory.support.AbstractBeanFactory.doResolveBeanClass(AbstractBeanFactory.java:1438)
at org.springframework.beans.factory.support.AbstractBeanFactory.resolveBeanClass(AbstractBeanFactory.java:1383)
... 38 more
Failed to start grid: Failed to instantiate Spring XML application context (make sure all classes used in Spring configuration are present at CLASSPATH) [springUrl=https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube-persistence-and-wal.xml]
Note! You may use 'USER_LIBS' environment variable to specify your classpath.
</code></pre>
<p>As the error message says, the required classes are not on the classpath. Is there an issue with the Docker image, or with the deployment?</p>
<p>Thanks in advance.</p>
| Nuwan Sameera | <p>That's the old documentation. The new, updated ones are better: <a href="https://ignite.apache.org/docs/latest/installation/kubernetes/gke-deployment" rel="nofollow noreferrer">https://ignite.apache.org/docs/latest/installation/kubernetes/gke-deployment</a></p>
<p>But, in short, you're missing the <code>ignite-kubernetes</code> module. In your deployment/stateful-set YAML file you need something like this:</p>
<pre><code> containers:
# Custom pod name.
- name: ignite-node
image: apacheignite/ignite:2.10.0
env:
- name: OPTION_LIBS
value: ignite-kubernetes,ignite-rest-http
- name: CONFIG_URI
value: file:///ignite/config/node-configuration.xml
</code></pre>
| Stephen Darlington |
<p>I need to create backups and restore for volumes in aws kubernetes cluster. I was reading about CSI driver in kubernetes docs. Though link below has mentioned the steps but I have few questions</p>
<p><a href="https://kubernetes-csi.github.io/docs/snapshot-restore-feature.html" rel="nofollow noreferrer">https://kubernetes-csi.github.io/docs/snapshot-restore-feature.html</a>
<a href="https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment" rel="nofollow noreferrer">https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment</a></p>
<ol>
<li>Where does it store the backups? No S3 location is mentioned anywhere.</li>
<li>Persistent volume claims, storage classes, and persistent volumes are referenced in statefulset.yaml, so volumes are created when the pod is created. I want to enable snapshots, and to restore from a snapshot in case it is needed.</li>
</ol>
<p>I am unable to understand how we can plug in snapshot.yaml and restore.yaml to create backups and to restore from a backup.</p>
<p>Can anyone please advise on this, or share links to the appropriate documents?</p>
| curious_soul | <p>Don't use the infrastructure to perform a backup. Ignite is a distributed system and its data needs to be consistent across all its nodes. Getting a snapshot of a single volume or even all the volumes connected to a pod is not sufficient.</p>
<p>Instead, try to use the built-in tools. Ignite recently got the <a href="https://ignite.apache.org/docs/latest/snapshots/snapshots" rel="nofollow noreferrer">ability to perform snapshots</a> and GridGain (which is built on Ignite) has <a href="https://www.gridgain.com/docs/8.8.13/administrators-guide/snapshots/snapshots-and-recovery" rel="nofollow noreferrer">had the ability for some time</a>.</p>
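<p>For illustration, a minimal sketch of triggering a native snapshot from inside the cluster (the pod and snapshot names are examples; the path assumes the official <code>apacheignite/ignite</code> image, and the snapshot command needs Ignite 2.11+):</p>
<pre><code># Create a cluster-wide snapshot with Ignite's control script
kubectl exec -it ignite-cluster-0 -- /opt/ignite/apache-ignite/bin/control.sh --snapshot create backup_snapshot
</code></pre>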
| Stephen Darlington |
<p>Copied from here: <a href="https://github.com/kubeflow/pipelines/issues/7608" rel="nofollow noreferrer">https://github.com/kubeflow/pipelines/issues/7608</a></p>
<p>I have a generated code file that runs against Kubeflow. It ran fine on Kubeflow v1, and now I'm moving it to Kubeflow v2. When I do this, I get the following error:
<code>json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)</code></p>
<p>I honestly don't even know where to go next. It feels like something is fundamentally broken for something to fail in the first character, but I can't see it (it's inside the kubeflow execution).</p>
<p>Thanks!</p>
<hr />
<h3>Environment</h3>
<ul>
<li><p>How did you deploy Kubeflow Pipelines (KFP)?
Standard deployment to AWS</p>
</li>
<li><p>KFP version:
1.8.1</p>
</li>
<li><p>KFP SDK version:
1.8.12</p>
</li>
</ul>
<p>Here's the logs:</p>
<pre><code>time="2022-04-26T17:38:09.547Z" level=info msg="capturing logs" argo=true
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead:
https://pip.pypa.io/warnings/venv
[KFP Executor 2022-04-26 17:38:24,691 INFO]: Looking for component `run_info_fn` in --component_module_path `/tmp/tmp.NJW6PWXpIt/ephemeral_component.py`
[KFP Executor 2022-04-26 17:38:24,691 INFO]: Loading KFP component "run_info_fn" from /tmp/tmp.NJW6PWXpIt/ephemeral_component.py (directory "/tmp/tmp.NJW6PWXpIt" and module name "ephemeral_component")
Traceback (most recent call last):
File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/site-packages/kfp/v2/components/executor_main.py", line 104, in <module>
executor_main()
File "/usr/local/lib/python3.7/site-packages/kfp/v2/components/executor_main.py", line 94, in executor_main
executor_input = json.loads(args.executor_input)
File "/usr/local/lib/python3.7/json/__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.7/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/lib/python3.7/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
time="2022-04-26T17:38:24.803Z" level=error msg="cannot save artifact /tmp/outputs/run_info/data" argo=true error="stat /tmp/outputs/run_info/data: no such file or directory"
Error: exit status 1
</code></pre>
<p>Here's the files to repro:
root_pipeline_04d99580c84b47c28405a2c8bcae8703.py</p>
<pre><code>import kfp.v2.components
from kfp.v2.dsl import InputPath
from kubernetes.client.models import V1EnvVar
from kubernetes import client, config
from typing import NamedTuple
from base64 import b64encode
import kfp.v2.dsl as dsl
import kubernetes
import json
import kfp
from run_info import run_info_fn
from same_step_000_ce6494722c474dd3b8bef482bb976557 import same_step_000_ce6494722c474dd3b8bef482bb976557_fn
run_info_comp = kfp.v2.dsl.component(
func=run_info_fn,
packages_to_install=[
"kfp",
"dill",
],
)
same_step_000_ce6494722c474dd3b8bef482bb976557_comp = kfp.v2.dsl.component(
func=same_step_000_ce6494722c474dd3b8bef482bb976557_fn,
base_image="public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/codeserver-python:v1.5.0",
packages_to_install=[
"dill",
"requests",
# TODO: make this a loop
],
)
@kfp.dsl.pipeline(name="root_pipeline_compilation",)
def root(
context: str='', metadata_url: str='',
):
# Generate secrets (if not already created)
secrets_by_env = {}
env_vars = {
}
run_info = run_info_comp(run_id=kfp.dsl.RUN_ID_PLACEHOLDER)
same_step_000_ce6494722c474dd3b8bef482bb976557 = same_step_000_ce6494722c474dd3b8bef482bb976557_comp(
input_context_path="",
run_info=run_info.outputs["run_info"],
metadata_url=metadata_url
)
same_step_000_ce6494722c474dd3b8bef482bb976557.execution_options.caching_strategy.max_cache_staleness = "P0D"
for k in env_vars:
same_step_000_ce6494722c474dd3b8bef482bb976557.add_env_variable(V1EnvVar(name=k, value=env_vars[k]))
</code></pre>
<p>run_info.py</p>
<pre><code>"""
The run_info component fetches metadata about the current pipeline execution
from kubeflow and passes it on to the user code step components.
"""
from typing import NamedTuple
def run_info_fn(
run_id: str,
) -> NamedTuple("RunInfoOutput", [("run_info", str),]):
from base64 import urlsafe_b64encode
from collections import namedtuple
import datetime
import base64
import dill
import kfp
client = kfp.Client(host="http://ml-pipeline:8888")
run_info = client.get_run(run_id=run_id)
run_info_dict = {
"run_id": run_info.run.id,
"name": run_info.run.name,
"created_at": run_info.run.created_at.isoformat(),
"pipeline_id": run_info.run.pipeline_spec.pipeline_id,
}
    # Track kubernetes resources associated with the run.
for r in run_info.run.resource_references:
run_info_dict[f"{r.key.type.lower()}_id"] = r.key.id
# Base64-encoded as value is visible in kubeflow ui.
output = urlsafe_b64encode(dill.dumps(run_info_dict))
return namedtuple("RunInfoOutput", ["run_info"])(
str(output, encoding="ascii")
)
</code></pre>
<p>same_step_000_ce6494722c474dd3b8bef482bb976557.py</p>
<pre><code>import kfp
from kfp.v2.dsl import component, Artifact, Input, InputPath, Output, OutputPath, Dataset, Model
from typing import NamedTuple
def same_step_000_ce6494722c474dd3b8bef482bb976557_fn(
input_context_path: InputPath(str),
output_context_path: OutputPath(str),
run_info: str = "gAR9lC4=",
metadata_url: str = "",
):
from base64 import urlsafe_b64encode, urlsafe_b64decode
from pathlib import Path
import datetime
import requests
import tempfile
import dill
import os
input_context = None
with Path(input_context_path).open("rb") as reader:
input_context = reader.read()
# Helper function for posting metadata to mlflow.
def post_metadata(json):
if metadata_url == "":
return
try:
req = requests.post(metadata_url, json=json)
req.raise_for_status()
except requests.exceptions.HTTPError as err:
print(f"Error posting metadata: {err}")
# Move to writable directory as user might want to do file IO.
# TODO: won't persist across steps, might need support in SDK?
os.chdir(tempfile.mkdtemp())
# Load information about the current experiment run:
run_info = dill.loads(urlsafe_b64decode(run_info))
# Post session context to mlflow.
if len(input_context) > 0:
input_context_str = urlsafe_b64encode(input_context)
post_metadata(
{
"experiment_id": run_info["experiment_id"],
"run_id": run_info["run_id"],
"step_id": "same_step_000",
"metadata_type": "input",
"metadata_value": input_context_str,
"metadata_time": datetime.datetime.now().isoformat(),
}
)
# User code for step, which we run in its own execution frame.
user_code = f"""
import dill
# Load session context into global namespace:
if { len(input_context) } > 0:
dill.load_session("{ input_context_path }")
{dill.loads(urlsafe_b64decode("gASVGAAAAAAAAACMFHByaW50KCJIZWxsbyB3b3JsZCIplC4="))}
# Remove anything from the global namespace that cannot be serialised.
# TODO: this will include things like pandas dataframes, needs sdk support?
_bad_keys = []
_all_keys = list(globals().keys())
for k in _all_keys:
try:
dill.dumps(globals()[k])
except TypeError:
_bad_keys.append(k)
for k in _bad_keys:
del globals()[k]
# Save new session context to disk for the next component:
dill.dump_session("{output_context_path}")
"""
# Runs the user code in a new execution frame. Context from the previous
# component in the run is loaded into the session dynamically, and we run
# with a single globals() namespace to simulate top-level execution.
exec(user_code, globals(), globals())
# Post new session context to mlflow:
with Path(output_context_path).open("rb") as reader:
context = urlsafe_b64encode(reader.read())
post_metadata(
{
"experiment_id": run_info["experiment_id"],
"run_id": run_info["run_id"],
"step_id": "same_step_000",
"metadata_type": "output",
"metadata_value": context,
"metadata_time": datetime.datetime.now().isoformat(),
}
)
</code></pre>
<p>Python file to execute to run:</p>
<pre><code>from sameproject.ops import helpers
from pathlib import Path
import importlib
import kfp
def deploy(compiled_path: Path, root_module_name: str):
with helpers.add_path(str(compiled_path)):
kfp_client = kfp.Client() # only supporting 'kubeflow' namespace
root_module = importlib.import_module(root_module_name)
return kfp_client.create_run_from_pipeline_func(
root_module.root,
arguments={},
)
</code></pre>
| aronchick | <p>Turns out it has to do with not compiling with the right execution mode on.</p>
<p>If you're getting this, your code should look like this.</p>
<pre><code>Compiler(mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE).compile(pipeline_func=root_module.root, package_path=str(package_yaml_path))
</code></pre>
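<p>For context, a fuller sketch with the import included (assuming KFP SDK 1.8.x, where <code>Compiler</code> lives in <code>kfp.compiler</code>; <code>root_module</code> and <code>package_yaml_path</code> come from the deploy script in the question):</p>
<pre><code>import kfp
from kfp.compiler import Compiler

# V2_COMPATIBLE mode produces pipelines that run under the KFP v2 executor
Compiler(mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE).compile(
    pipeline_func=root_module.root,
    package_path=str(package_yaml_path),
)
</code></pre>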
| aronchick |
<p>Is it possible to programmatically set the sourcetype to be the namespace from which the logs were generated? I am using the fluentd plugin to send data to the Splunk HTTP Event Collector. Elsewhere, it was recommended to use ${record['kubernetes']['namespace_name']} to set the index name to be the namespace name. When I do this for sourcetype, that actual text just shows up in Splunk rather than translating to the specific namespace names.</p>
<pre><code>@include systemd.conf
@include kubernetes.conf
<match kubernetes.var.log.containers.fluentd**>
type null
</match>
<match **>
type splunk-http-eventcollector
all_items true
server host:port
token ****
index kubernetes
protocol https
verify false
sourcetype ${record['kubernetes']['namespace_name']
source kubernetes
buffer_type memory
buffer_queue_limit 16
chunk_limit_size 8m
buffer_chunk_limit 150k
flush_interval 5s
</match>
</code></pre>
| trouphaz | <p>If you have not defined a <code>sourcetype</code> in an appropriate <code>props.conf</code> (and associated <code>transforms.conf</code>), Splunk will try to determine the sourcetype based on heuristics</p>
<p>Those heuristics are not generally very accurate on custom data sources</p>
<p>Instead of trying to "programatically set the sourcetype to be the namespace from where the logs were generated", add a field whose contents indicate the namespace from which the logs are generated (eg "namespace")</p>
<p>It's much simpler, extends your logging more efficiently, and doesn't require the definition of scores or hundreds or thousands of individual sourcetypes</p>
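<p>For illustration, a minimal sketch of adding such a field with fluentd's built-in <code>record_transformer</code> filter, placed before the Splunk output (the match pattern is an example):</p>
<pre><code># Add a "namespace" field to each record before it reaches the output
<filter kubernetes.**>
  @type record_transformer
  enable_ruby
  <record>
    namespace ${record.dig("kubernetes", "namespace_name")}
  </record>
</filter>
</code></pre>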
| warren |
<p>Is there a way to connect a C# thick client, running on a Windows machine outside of Kubernetes, to Apache Ignite cluster nodes that are present in Kubernetes?</p>
<p>The article below says it is not possible, but it was written in 2020. We are looking for Scenario-3 from this article:
<a href="https://dzone.com/articles/apache-ignite-on-kubernetes-things-to-know-about" rel="nofollow noreferrer">https://dzone.com/articles/apache-ignite-on-kubernetes-things-to-know-about</a>
I hope there might be some enhancements for Scenario-3 by now.</p>
<p>We don't want to convert our C# thick client to a thin client, as we are using the Data Streamer to insert data in bulk, and the same functionality is not available with the thin client.</p>
| Rameish | <p>The recommendation here would be to use the thin client. The .NET thin client does have the data streamer API.</p>
<p>There is no straight-forward way to connect a thick-client node from outside Kubernetes to a cluster inside it.</p>
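<p>For illustration, a minimal sketch of the thin-client streamer (assuming Apache.Ignite 2.10+, where the .NET thin client gained <code>GetDataStreamer</code>; the endpoint and cache name are examples):</p>
<pre><code>using Apache.Ignite.Core;
using Apache.Ignite.Core.Client;

var cfg = new IgniteClientConfiguration
{
    // Example endpoint: the Kubernetes Service exposing the thin-client port
    Endpoints = new[] { "ignite-service.default.svc.cluster.local:10800" }
};

using var client = Ignition.StartClient(cfg);
client.GetOrCreateCache<int, string>("myCache");

// Bulk-load entries, analogous to the thick client's IDataStreamer
using var streamer = client.GetDataStreamer<int, string>("myCache");
for (var i = 0; i < 1000; i++)
{
    streamer.Add(i, "value-" + i);
}
</code></pre>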
| Stephen Darlington |
<p>I'm working locally (within Docker for Mac) on a Kubernetes cluster that will eventually be deployed to the cloud. We plan to use a database service in that environment. To simulate that, I'd like to have the services in the cluster connect to a database running outside the cluster on my laptop.</p>
<p>Can I do that? Here's what I thought I'd try.</p>
<ul>
<li>Define a <code>Service</code> with <code>type: ExternalName</code> and <code>externalName: somedb.local</code></li>
<li>Add <code>127.0.0.1 somedb.local</code> to <code>/etc/hosts</code> on the laptop</li>
</ul>
<p>Is that correct? Is there a better way?</p>
| Nathan Long | <p>After talking with some colleagues, I found a solution.</p>
<p>In Docker for Mac, <code>host.docker.internal</code> points to the host machine, and that lets me connect to the db running there, even from containers running in the K8s cluster.</p>
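<p>If you want a stable in-cluster name for it, an <code>ExternalName</code> Service can alias that hostname (the service name here is an example):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: somedb
spec:
  type: ExternalName
  externalName: host.docker.internal
</code></pre>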
| Nathan Long |
<p><strong>Background</strong></p>
<p>On the Google Kubernetes Engine we've been using Cloud Endpoints, and the Extensible Service Proxy (v2) for service-to-service authentication.</p>
<p>The services authenticate themselves by including the bearer JWT token in the <code>Authorization</code> header of the HTTP requests.</p>
<p>The identity of the services has been maintained with GCP Service Accounts, and during deployment, the Json Service Account key is mounted to the container at a predefined location, and that location is set as the value of the <code>GOOGLE_APPLICATION_CREDENTIALS</code> env var.</p>
<p>The services are implemented in C# with ASP.NET Core, and to generate the actual JWT token, we use the Google Cloud SDK (<a href="https://github.com/googleapis/google-cloud-dotnet" rel="nofollow noreferrer">https://github.com/googleapis/google-cloud-dotnet</a>, and <a href="https://github.com/googleapis/google-api-dotnet-client" rel="nofollow noreferrer">https://github.com/googleapis/google-api-dotnet-client</a>), where we call the following method:</p>
<pre class="lang-cs prettyprint-override"><code>var credentials = GoogleCredential.GetApplicationDefault();
</code></pre>
<p>If the <code>GOOGLE_APPLICATION_CREDENTIALS</code> is correctly set to the path of the Service Account key, then this returns a <code>ServiceAccountCredential</code> object, on which we can call the <code>GetAccessTokenForRequestAsync()</code> method, which returns the actual JWT token.</p>
<pre class="lang-cs prettyprint-override"><code>var jwtToken = await credentials.GetAccessTokenForRequestAsync("https://other-service.example.com/");
var authHeader = $"Bearer {jwtToken}";
</code></pre>
<p>This process has been working correctly without any issues.</p>
<p>The situation is that we are in the process of migrating from using the manually maintained Service Account keys to using <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="nofollow noreferrer">Workload Identity</a> instead, and I cannot figure out how to correctly use the Google Cloud SDK to generate the necessary JWT tokens in this case.</p>
<p><strong>The problem</strong></p>
<p>When we enable Workload Identity in the container, and don't mount the Service Account key file, nor set the <code>GOOGLE_APPLICATION_CREDENTIALS</code> env var, then the <code>GoogleCredential.GetApplicationDefault()</code> call returns a <code>ComputeCredential</code> instead of a <code>ServiceAccountCredential</code>.<br />
And if we call the <code>GetAccessTokenForRequestAsync()</code> method, that returns a token which is not in the JWT format.</p>
<p>I checked the implementation, and the token seems to be retrieved from the Metadata server, of which the expected response format seems to be the standard OAuth 2.0 model (represented in <a href="https://github.com/googleapis/google-api-dotnet-client/blob/master/Src/Support/Google.Apis.Auth/OAuth2/Responses/TokenResponse.cs" rel="nofollow noreferrer">this model class</a>):</p>
<pre><code>{
"access_token": "foo",
"id_token": "bar",
"token_type": "Bearer",
...
}
</code></pre>
<p>And the <code>GetAccessTokenForRequestAsync()</code> method returns the value of <code>access_token</code>. But as far as I understand, that's not a JWT token, and indeed when I tried using it to authenticate against ESP, it responded with</p>
<pre><code>{
"code": 16,
"message": "JWT validation failed: Bad JWT format: Invalid JSON in header",
..
}
</code></pre>
<p>As far as I understand, normally the <code>id_token</code> contains the JWT token, which should be accessible via the <code>IdToken</code> property of the <code>TokenResponse</code> object, which is also accessible via the SDK, I tried accessing it like this:</p>
<pre><code>var jwtToken = ((ComputeCredential)creds.UnderlyingCredential).Token.IdToken;
</code></pre>
<p>But this returns <code>null</code>, so apparently the metadata server does not return anything in the <code>id_token</code> field.</p>
<p><strong>Question</strong></p>
<p>What would be the correct way to get the JWT token with the .NET Google Cloud SDK for accessing ESP, when using Workload Identity in GKE?</p>
| Mark Vincze | <p>To get an IdToken for the attached service account, you can use <code>GoogleCredential.GetApplicationDefault().GetOidcTokenAsync(...)</code>.</p>
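<p>For illustration, a minimal sketch (assuming a recent <code>Google.Apis.Auth</code>; the audience is an example):</p>
<pre class="lang-cs prettyprint-override"><code>using Google.Apis.Auth.OAuth2;

var credential = await GoogleCredential.GetApplicationDefaultAsync();

// Ask for an OIDC (JWT) token for the target audience, instead of an OAuth access token
var oidcToken = await credential.GetOidcTokenAsync(
    OidcTokenOptions.FromTargetAudience("https://other-service.example.com/"));

var jwt = await oidcToken.GetAccessTokenAsync();
var authHeader = $"Bearer {jwt}";
</code></pre>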
| Johannes Passing |
<p>In Kubernetes, when a Pod repeatedly crashes and is in <code>CrashLoopBackOff</code> status, it is not possible to shell into the container and poke around to find the problem, due to the fact that containers (unlike VMs) live only as long as the primary process. If I shell into a container and the Pod is restarted, I'm kicked out of the shell.</p>
<p>How can I keep a Pod from crashing so that I can investigate if my primary process is failing to boot properly?</p>
| Nathan Long | <h2>Redefine the <code>command</code></h2>
<p>In <strong>development only</strong>, a temporary hack to keep a Kubernetes pod from crashing is to redefine it and <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="noreferrer">specify the container's <code>command</code></a> (corresponding to a Docker <code>ENTRYPOINT</code>) and <code>args</code> to be a command that will not crash. For instance:</p>
<pre><code> containers:
- name: something
image: some-image
# `shell -c` evaluates a string as shell input
command: [ "sh", "-c"]
# loop forever, outputting "yo" every 5 seconds
args: ["while true; do echo 'yo' && sleep 5; done;"]
</code></pre>
<p>This allows the container to run and gives you a chance to shell into it, like <code>kubectl exec -it pod/some-pod -- sh</code>, and investigate what may be wrong.</p>
<p>This needs to be undone after debugging so that the container will run the command it's actually meant to run.</p>
<p>Adapted from <a href="https://beanexpert.co.in/troubleshoot-pod-crashloopbackoff-error-kubernetes/" rel="noreferrer">this blog post</a>.</p>
| Nathan Long |
<p>I have a hard requirement to use a single ELB Classic (CLB) load balancer. Can a single ELB Classic (CLB) distribute traffic between two different Auto Scaling Groups, both running the same application code with no special path based routing needed from an ALB (Application Load Balancer).</p>
<p>For example, in a high availability (HA) cluster set-up with KOPS, how does KOPS make it possible to use a single ELB Classic load balancer (as an entry point to the API server) to serve traffic to two different Auto Scaling Groups in different Availability Zones (AZs) each with their own master instances?</p>
<p>Thanks in advance.</p>
| user791134 | <p>A single classic ELB cannot have multiple ASGs associated with it, but the newer Application Load Balancer can do this.</p>
| chris |
<p>I'm following <a href="https://cloud.google.com/python/django/kubernetes-engine" rel="nofollow noreferrer">the tutorial on how to deploy a Django application to the Kubernetes Engine</a> in the Google Cloud Platform and on step 9 it does this:</p>
<blockquote>
<ol start="9">
<li><p>Retrieve the public Docker image for the Cloud SQL proxy.</p>
<pre><code>docker pull b.gcr.io/cloudsql-docker/gce-proxy:1.05
</code></pre></li>
</ol>
</blockquote>
<p>What is this Cloud SQL proxy image? Do I understand it correctly that the application, the web workers, are deployed to images built on top of the Cloud SQL proxy image? Is this so that they can access the database?</p>
<p>Looking at <a href="https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/container_engine/django_tutorial/polls.yaml" rel="nofollow noreferrer">the yaml file</a> for the application, it looks like the image generated out of the Cloud SQL proxy will be running the application bu then there's another container that is just the cloudsql-docker image. Why is this second container needed?</p>
| Pablo Fernandez | <p>I'm sure some other people that actually understand Kubernetes, Docker and GCP will chip in with better answers, but I wanted to drop in what I've learned so far in case others arrived here with the same question.</p>
<blockquote>
<p>What is this Cloud SQL proxy image?</p>
</blockquote>
<p>That is a Docker image that runs the Cloud SQL proxy which is explained here: <a href="https://cloud.google.com/sql/docs/postgres/sql-proxy" rel="nofollow noreferrer">https://cloud.google.com/sql/docs/postgres/sql-proxy</a></p>
<p>I'm still not 100% sure why this SQL Proxy is used though.</p>
<blockquote>
<p>Do I understand it correctly that the application, the web workers, are deployed to images built on top of the Cloud SQL proxy image?</p>
</blockquote>
<p>That was wrong. This command:</p>
<pre><code>docker build -t gcr.io/<your-project-id>/polls .
</code></pre>
<p>uses the <a href="https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/container_engine/django_tutorial/Dockerfile" rel="nofollow noreferrer">Dockerfile from the git repo</a> which states <code>gcr.io/google_appengine/python</code> as the base image for the app image.</p>
| Pablo Fernandez |
<p>I was trying to reduce the production of logs in Splunk. When I tried to reduce the logs by using the below query, I was getting a large number of log entries:</p>
<pre><code>Spunk query: (source!="/var/log/kubernetes/audit/kube-apiserver-audit.log" cluster_name::dkp-test index="tst-dkp") null sourcetype="fluentd:monitor-agent"
</code></pre>
<p>I tried to reduce the log production by using <code>emit_interval 300</code> and <code>cache_ttl 3600</code>, but it did not work.</p>
<p>Can anyone please suggest how I should reduce the production of logs in the Splunk connector? Thanks.</p>
<p>I found that the logs were being produced from the node:</p>
<pre><code>[ec2-user@ip-11-21-111-11 ~]$ sudo su
[root@ip-11-21-111-11 ec2-user]# ps -ef | grep flu
root 2536779 2536761 0 Jan31 ? 00:00:16 /usr/bin/fluentd -c /fluentd/etc/fluent.conf
root 2536965 2536779 0 Jan31 ? 00:07:34 /usr/bin/ruby -r/usr/local/share/gems/gems/bundler-2.2.33/lib/bundler/setup -Eascii-8bit:ascii-8bit /usr/bin/fluentd -c /fluentd/etc/fluent.conf --under-supervisor
root 2536978 2536965 0 Jan31 ? 00:00:00 sh -c jq --unbuffered -c '.record.source = "namespace:platform/pod:splunk-connect-splunk-kubernetes-logging-vmkjx" | .record.sourcetype = "fluentd:monitor-agent" | .record.cluster_name = "platform-dkp-test" | .record.splunk_index = "ss-tst-dkp" | .record' 2>&1
root 2536980 2536978 0 Jan31 ? 00:00:02 jq --unbuffered -c .record.source = "namespace:platform/pod:splunk-connect-splunk-kubernetes-logging-vmkjx" | .record.sourcetype = "fluentd:monitor-agent" | .record.cluster_name = "platform-dkp-test" | .record.splunk_index = "ss-tst-dkp" | .record
root 3730152 3730072 0 13:21 pts/0 00:00:00 grep --color=auto flu
</code></pre>
<p>The logs being generated in Splunk are:</p>
<pre><code>{ [-]
emit_records: 0
emit_size: 0
output_plugin: false
plugin_category: filter
plugin_id: object:c11c
retry_count: null
type: jq_transformer
}
Show as raw text
host = ip-11-21-111-11.ec2.internalsource = namespace:platform/pod:splunk-connect-splunk-kubernetes-logging-jxsourcetype = fluentd:monitor-agent
</code></pre>
<p>In this, I want to reduce the logging for <code>retry_count: null</code>.</p>
| Sayon | <p>The only way to "reduce the production of logs" going into Splunk is to not log as much that the Universal Forwarder picks up, or that is sent via the HTTP Event Collector</p>
<p>If you want to <em>filter</em> events based on criteria in your SPL, then you need to look at the field(s) in question, and only select what you're looking for</p>
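<p>For example, a sketch in SPL using the index and sourcetype from the question (exact matching depends on how the JSON fields are indexed):</p>
<pre><code>index="tst-dkp" sourcetype="fluentd:monitor-agent" NOT retry_count="null"
</code></pre>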
| warren |
<p><a href="https://i.stack.imgur.com/szaBC.png" rel="nofollow noreferrer">enter image description here</a></p>
<p><code>Operation failed. Check pod logs for install-runner for more details.</code></p>
<p>I am getting this error while trying to install the GitLab runner.
Here is what I have done so far:</p>
<ul>
<li>successfully installed Kubernetes cluster</li>
<li>created a demo project in Gitlab</li>
<li>provided details to GitLab for Kubernetes cluster</li>
</ul>
<p><strong>Then, while trying to install the runner, it shows a failure.</strong>
What am I missing here? [please check the attached image]</p>
| Meshu Deb Nath | <p>Warning, with GitLab 13.11 (April 2021):</p>
<blockquote>
<h2><a href="https://about.gitlab.com/releases/2021/04/22/gitlab-13-11-released/#one-click-gitlab-managed-apps-will-be-removed-in-gitlab-14.0" rel="nofollow noreferrer">One-click GitLab Managed Apps will be removed in GitLab 14.0</a></h2>
<p>We are deprecating one-click install of GitLab Managed Apps.</p>
<p>Although they made it very easy to get started with deploying to Kubernetes from GitLab, the overarching community feedback was that they were not flexible or customizable enough for real-world Kubernetes applications.</p>
<p>Instead, our future direction will focus on <a href="https://docs.gitlab.com/ee/user/clusters/applications.html#install-using-gitlab-cicd-alpha" rel="nofollow noreferrer">installing apps on Kubernetes via GitLab CI/CD</a> in order to provide a better balance between ease-of-use and expansive customization.</p>
<p>We plan to remove one-click Managed Apps completely in GitLab version 14.0.<br />
This will not affect how existing managed applications run inside your cluster, however, you’ll no longer have the ability to modify those applications via the GitLab UI.</p>
<p>We recommend cluster administrators plan to migrate any existing managed applications by reinstalling them either manually or via CI/CD. Migration instructions will be available in our documentation later.</p>
<p>For users of alerts on managed Prometheus, in GitLab version 14.0, we will also remove the ability to setup/modify alerts from the GitLab UI. This change is necessary because the existing solution will no longer function once managed Prometheus is removed.</p>
</blockquote>
<p>Deprecation date: May 22, 2021</p>
| VonC |
<p>I am currently using Kubernetes Python SDK to fetch relevant information from my k8s cluster. I am running this from outside the cluster.</p>
<p>I have a requirement of fetching the images of all the pods running within a namespace. I did look at the Docker Python SDK, but that requires me to be running the script on the cluster itself, which I want to avoid.</p>
<p>Is there a way to get this done ?</p>
<p>TIA</p>
| Rakshith Venkatesh | <blockquote>
<p>that requires me to be running the script on the cluster itself</p>
</blockquote>
<p>No, it should not: the <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">kubernetes-client python</a> performs operations similar to <strong><a href="https://kubernetes.io/docs/reference/kubectl/overview/" rel="nofollow noreferrer"><code>kubectl</code></a></strong> calls (as <a href="https://github.com/kubernetes-client/python/blob/e057f273069de445a2d5a250ac5fe37d79671f3b/examples/notebooks/intro_notebook.ipynb" rel="nofollow noreferrer">detailed here</a>).<br>
And <code>kubectl</code> calls can be done from any client with a properly set <code>.kube/config</code> file.</p>
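<p>For illustration, a minimal sketch with the Python client (assuming a valid <code>~/.kube/config</code>; the namespace is an example):</p>
<pre><code>from kubernetes import client, config

# Reads ~/.kube/config, so this works from outside the cluster
config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="default").items:
    for container in pod.spec.containers:
        print(pod.metadata.name, container.image)
</code></pre>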
<p>Once you get the image name from a <code>kubectl describe po/mypod</code>, you might need to docker pull that image locally if you want more (like a docker history).</p>
<p>The <a href="https://stackoverflow.com/users/3219658/raks">OP Raks</a> adds <a href="https://stackoverflow.com/questions/52685119/fetching-docker-image-information-using-python-sdk/52686099?noredirect=1#comment92326426_52686099">in the comments</a>:</p>
<blockquote>
<p>I wanted to know if there is a python client API that actually gives me an option to do docker pull/save/load of an image as such</p>
</blockquote>
<p>The <a href="https://docker-py.readthedocs.io/en/stable/images.html" rel="nofollow noreferrer"><strong>docker-py</strong></a> library can pull/load/save images.</p>
| VonC |
<p>I have a Kubernetes v1.10.2 cluster and a cronjob on it.
The job config is set to:</p>
<pre><code> failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 3
</code></pre>
<p>But it has created more than ten jobs, which are all successful and not removed automatically.
Now I am trying to delete them manually with <code>kubectl delete job XXX</code>, but the command times out, as:</p>
<pre><code>$ kubectl delete job XXX
error: timed out waiting for "XXX" to be synced
</code></pre>
<p>I want to know how I can investigate such a situation. Is there a log file for the command execution?</p>
<p>I only know the <code>kubectl logs</code> command, but it is not for such a situation.</p>
<p>"kubectl get" shows the job has already finished:</p>
<pre><code>status:
active: 1
completionTime: 2018-08-27T21:20:21Z
conditions:
- lastProbeTime: 2018-08-27T21:20:21Z
lastTransitionTime: 2018-08-27T21:20:21Z
status: "True"
type: Complete
failed: 3
startTime: 2018-08-27T01:00:00Z
succeeded: 1
</code></pre>
<p>and "kubectl describe" output as:</p>
<pre><code>$ kubectl describe job test-elk-xxx-1535331600 -ntest
Name: test-elk-xxx-1535331600
Namespace: test
Selector: controller-uid=863a14e3-a994-11e8-8bd7-fa163e23632f
Labels: controller-uid=863a14e3-a994-11e8-8bd7-fa163e23632f
job-name=test-elk-xxx-1535331600
Annotations: <none>
Controlled By: CronJob/test-elk-xxx
Parallelism: 0
Completions: 1
Start Time: Mon, 27 Aug 2018 01:00:00 +0000
Pods Statuses: 1 Running / 1 Succeeded / 3 Failed
Pod Template:
Labels: controller-uid=863a14e3-a994-11e8-8bd7-fa163e23632f
job-name=test-elk-xxx-1535331600
Containers:
xxx:
Image: test-elk-xxx:18.03-3
Port: <none>
Host Port: <none>
Args:
--config
/etc/elasticsearch-xxx/xxx.yml
/etc/elasticsearch-xxx/actions.yml
Limits:
cpu: 100m
memory: 100Mi
Requests:
cpu: 100m
memory: 100Mi
Environment: <none>
Mounts:
/etc/elasticsearch-xxx from xxx-configs (ro)
Volumes:
xxx-configs:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: test-elk-xxx
Optional: false
Events: <none>
</code></pre>
<p>It indicates one pod is still running, but I don't know how to figure out the pod name.</p>
| Michael.Sun | <p>Check if <code>kubectl describe pod <pod name></code> (associated pod of the job) still returns something, which would:</p>
<ul>
<li>mean the node is still there</li>
<li>include the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions" rel="nofollow noreferrer">pod condition</a></li>
</ul>
<p>In that state, you can then consider <a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/#delete-pods" rel="nofollow noreferrer">a force deletion</a>.</p>
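<p>For illustration, a sketch of finding the job's pod via its <code>job-name</code> label and force-deleting it (names taken from the question; the pod name is a placeholder):</p>
<pre><code># Job pods carry a job-name label
kubectl get pods -n test -l job-name=test-elk-xxx-1535331600
# Force-delete the stuck pod if a normal delete hangs
kubectl delete pod <pod-name> -n test --grace-period=0 --force
</code></pre>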
| VonC |
<p>I have an application that gets deployed from a docker image to a Kubernetes pod. Inside of my docker image I run the following command</p>
<pre><code>FROM openjdk:17.0.1-slim
USER root
WORKDIR /opt/app
ARG JAR_FILE
ARG INFO_APP_BUILD
RUN apt-get update
RUN apt-get install -y sshpass
RUN apt-get install -y openssh-server
COPY /build/libs/*SNAPSHOT.jar /opt/app/app.jar
ENV INFO_APP_BUILD=${INFO_APP_BUILD}
EXPOSE 8080
CMD java -jar /opt/app/app.jar
</code></pre>
<p>When the application gets deployed, out of my control, the user gets set to a non root user.</p>
<p>Now the important part is that when I try to launch an SSH command, I get the error message <code>no user exists for uid [random id here]</code>.</p>
<p>My goal is to configure the docker image to create a user and grant it permission to use the SSH command.</p>
| wowza_MAN | <blockquote>
<p>When the application gets deployed, out of my control, the user gets set to a non root user.</p>
</blockquote>
<p>Inside the container, the user running <code>java -jar /opt/app/app.jar</code> is root, because of <code>USER root</code>.</p>
<p>Outside the container, on the host, a deployed application is usually (almost exclusively) never executed/accessed as <code>root</code>.</p>
<p>But it should still make ssh request from within the container to a server:</p>
<ul>
<li>the <a href="https://ubuntu.com/server/docs/service-openssh" rel="nofollow noreferrer">openssh service</a> is started</li>
<li>the container /root/.ssh has the right public/private key</li>
<li>the <code>~user/.ssh</code> folder, on the target server where the Docker application is running, has the authorized_keys with the public one in it.</li>
</ul>
<p>But if the user does not exist inside the container, you need to create it on <code>docker run</code>, as <a href="https://unix.stackexchange.com/a/613055/7490">in here</a>:</p>
<pre class="lang-sh prettyprint-override"><code>docker run -it --rm --entrypoint sh "$@" \
-c "[ -x /usr/sbin/useradd ] && useradd -m -u $(id -u) u1 -s /bin/sh || adduser -D -u $(id -u) u1 -s /bin/sh;
exec su - u1"
</code></pre>
| VonC |
<p>I have a simple website running in my Kubernetes cluster and exposed to the Internet using Traefik. My Ingress object looks like this (the only things I've changed here are the name and domain names):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: letsencrypt-staging
kubernetes.io/ingress.class: traefik
name: my-cool-website
namespace: default
spec:
rules:
- host: my-cool-website.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-cool-website
port:
number: 80
- host: www.my-cool-website.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-cool-website
port:
number: 80
tls:
- hosts:
- my-cool-website.com
- www.my-cool-website.com
secretName: my-cool-website-tls
</code></pre>
<p>This <strong>works</strong>. It allows me to access the site either from <code>my-cool-website.com</code> or from <code>www.my-cool-website.com</code>. But what I'd like to have happen is that if someone visits the former, that Traefik automatically redirects them to the latter. I found a couple of guides online that recommended creating a Traefik middleware, so I did just that, but unfortunately it doesn't work as intended. Here is my middleware definition:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: my-cool-website-force-www
spec:
redirectRegex:
regex: ^https?://my-cool-website.com/(.*)
replacement: https://www.my-cool-website.com/${1}
</code></pre>
<p>And then I add the following annotation back to the ingress object:</p>
<pre class="lang-yaml prettyprint-override"><code>traefik.ingress.kubernetes.io/router.middlewares: my-cool-website-force-www
</code></pre>
<p>But as soon as I do that, it breaks my web app. By that I mean, when that annotation is applied, instead of serving my website, I start seeing a generic nginx page that looks like this when I try to access the domain (and also it does not do the redirect):</p>
<p><a href="https://i.stack.imgur.com/fuiJY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fuiJY.png" alt="Hello World generic Traefik response" /></a></p>
<p>I have to assume this Hello World page is being served by Traefik as some sort of generic default page, as it definitely does not originate from my pod. So that tells me something about this middleware (or perhaps how I'm calling it with the annotation) isn't working. What am I missing?</p>
| soapergem | <p>I figured it out by port-forwarding to the Traefik dashboard and looking at the service there. It was showing a "middleware not found" error. I then clicked over to the middlewares and realized that they end up with a longer canonical name. So in my annotation I had to change the reference from <code>my-cool-website-force-www</code> to <code>default-my-cool-website-force-www@kubernetescrd</code>, and then everything worked.</p>
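<p>For reference, the working annotation looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>metadata:
  annotations:
    traefik.ingress.kubernetes.io/router.middlewares: default-my-cool-website-force-www@kubernetescrd
</code></pre>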
| soapergem |
<p>I am using Clair for Vulnerability checks in my harbor.</p>
<p>Services like Clair that have thousands of hosts continually hitting the hosting git server (<a href="https://git.launchpad.net/ubuntu-cve-tracker/" rel="nofollow noreferrer">https://git.launchpad.net/ubuntu-cve-tracker/</a>) saturate the server, so there are scaling measures in place that cause it to return a 503 error when too many clients hit it concurrently.</p>
<p>These are my errors in my Clair pod:</p>
<pre><code>{"Event":"could not pull ubuntu-cve-tracker repository","Level":"error","Location":"ubuntu.go:174",
"Time":"2021-06-25 06:38:32.859806","error":"exit status 128",
"output":"Cloning into '.'...
fatal: unable to access 'https://git.launchpad.net/ubuntu-cve-tracker/':
The requested URL returned error: 503\n"}
{"Event":"an error occured when fetching update","Level":"error","Location":"updater.go:246",
"Time":"2021-06-25 06:38:32.859934","error":"could not download requested resource","updater name":"ubuntu"}
</code></pre>
<pre><code>panic: runtime error: slice bounds out of range goroutine 549 [running]: github.com/coreos/clair/ext/vulnsrc/rhel.toFeatureVersions(0xc000208390, 0x2, 0xc000246070, 0x1, 0x1, 0xc0001bc200, 0x1, 0x1, 0x0, 0x908f38, ...) /go/src/github.com/coreos/clair/ext/vulnsrc/rhel/rhel.go:292 +0xc3b github.com/coreos/clair/ext/vulnsrc/rhel.parseRHSA(0x7fcc0f4a24b0, 0xc00038c0f0, 0xc00038c0f0, 0x7fcc0f4a24b0, 0xc00038c0f0, 0x8e2708, 0x4) /go/src/github.com/coreos/clair/ext/vulnsrc/rhel/rhel.go:182 +0x1c8
</code></pre>
<p>As per <a href="https://bugs.launchpad.net/ubuntu-cve-tracker/+bug/1925337" rel="nofollow noreferrer">https://bugs.launchpad.net/ubuntu-cve-tracker/+bug/1925337</a>, this is a bug on the git server side, and in that post they suggest getting Clair to pull data from other sources instead, which means an offline approach. So, apart from the offline approach, is there any other way to decrease the number of hits to the git server for vulnerability checks?</p>
<p>I have tried to control the number of hits to the git server, but I have found no such configuration in Clair.</p>
<p>Does anyone have any idea how we can control the hits for vulnerability checks, or how to avoid restarts of my pod?</p>
<p>Also, I found the option to schedule a scan (hourly, daily, or weekly) in my Harbor UI. But how does scheduling the scan, say daily, help?<br />
Is it only at that point that it will try to do the git clone to get the latest CVEs?</p>
| Anvesh Muppeda | <p>Check first if this is linked to <a href="https://github.com/goharbor/harbor/issues/14720" rel="nofollow noreferrer"><code>goharbor/harbor</code> issue 14720</a>: "clair restarts repeatedly when there is some issue with vulnerability repos", with logs like</p>
<pre class="lang-golang prettyprint-override"><code>{"Event":"Start fetching vulnerabilities","Level":"info","Location":"ubuntu.go:85","Time":"2021-04-21 19:18:24.446743","package":"Ubuntu"}
...
{"Event":"could not pull ubuntu-cve-tracker repository","Level":"error","Location":"ubuntu.go:174","Time":"2021-04-21 19:18:25.147515","error":"exit status 128","output":"Cloning into '.'...\nfatal: unable to access 'https://git.launchpad.net/ubuntu-cve-tracker/': The requested URL returned error: 503\n"}
{"Event":"an error occured when fetching update","Level":"error","Location":"updater.go:246","Time":"2021-04-21 19:18:25.147607","error":"could not download requested resource","updater name":"ubuntu"}
...
panic: runtime error: slice bounds out of range [25:24]
goroutine 327 [running]:
github.com/quay/clair/v2/ext/vulnsrc/rhel.toFeatureVersions(0xc0065215a8, 0x2, 0xc0000b4f08, 0x1, 0x1, 0xc006ef7aa0, 0x1, 0x1, 0x2, 0xc0000b4ef0, ...)
/go/src/github.com/quay/clair/ext/vulnsrc/rhel/rhel.go:276 +0xbf8
</code></pre>
<p>It refers to <a href="https://github.com/quay/clair/issues/1249#L278" rel="nofollow noreferrer"><code>quay/clair</code> issue 1249</a>, but the harbor case is closed with <a href="https://github.com/goharbor/harbor/pull/15032" rel="nofollow noreferrer">PR 15032</a>, using <code>CLAIRVERSION=v2.1.7</code></p>
| VonC |
<p>I am using Docker to run my Java WAR application, and when I run the container I get this exception: <strong>java.net.BindException: Address already in use</strong>.</p>
<p>The container exposes port 8085 (8080->8085/tcp).
I executed this command to run the Docker container:</p>
<blockquote>
<p>docker run -p 8080:8085/tcp -d --name=be-app java-app-image:latest</p>
</blockquote>
<p>This is a screenshot of the error:
<a href="https://i.stack.imgur.com/Hjoc0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hjoc0.png" alt="enter image description here" /></a></p>
<p>I checked the open ports inside the container:
<a href="https://i.stack.imgur.com/6YIvP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6YIvP.png" alt="enter image description here" /></a></p>
<p>I cannot restart Tomcat inside the container, because the container will stop. I thought about changing the 8085 port in the server.xml file, but then I think I should also change the exposed port.
Is there any solution to avoid this exception? (java.net.BindException: Address already in use)</p>
<p>This is also what I get when I run <code>ps aux</code>:
<a href="https://i.stack.imgur.com/xItSj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xItSj.png" alt="enter image description here" /></a></p>
| Y.hadj.younes | <p>The <code>ps</code> shows <em>two</em> java processes, possibly running Tomcat.</p>
<p>Since they would be running with the same parameters, including ports, it seems expected the second process fails with</p>
<pre><code>java.net.BindException: Address already in use
</code></pre>
<p>Make sure to <code>docker stop</code> everything first, and check the status of <code>docker ps --all</code></p>
| VonC |
<p>I have an app running in a kubernetes managed docker container, using Azure Kubernetes Service (AKS). I can output logs to a text file for the app deployment using:</p>
<pre><code>kubectl logs deployment/my-deployment > out.txt
</code></pre>
<p>This gives me a file of around max 28Mb. When I get the docker logs for the same container on the physical VM using <code>docker logs ...</code>, the log file is much bigger (up to 120Mb+).</p>
<p>How can I increase the size of the available <code>kubectl logs</code> for the deployment? If this option is available, then it would likely be an option that increases the size of the available <code>kubectl logs</code> for the <em>pod</em> that holds the container, as the pod and deployment logs are identical.</p>
<p>It's not the docker configuration in <code>/etc/docker/daemon.json</code> that's limiting the <code>kubectl</code> logs, as that's set to 50Mb. I've read that it's the underlying docker configuration that kubernetes uses, but that doesn't seem to be the case, as my <code>kubectl</code> logs are being truncated to around 28Mb.</p>
| Chris Halcrow | <p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs" rel="nofollow noreferrer"><code>kubectl logs</code></a> might be reading logs that are subject to a default log rotation, meaning the <a href="https://stackoverflow.com/a/39398892/6309">logrotate service is active</a>.</p>
<p>Check the content of <code>/etc/logrotate.d/docker-containers</code> (for example with <code>cat</code>), as in <a href="https://github.com/kubernetes/kubernetes/issues/11046" rel="nofollow noreferrer">this issue</a>, for confirmation.</p>
<p>As explained in <a href="https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/cluster-administration/logging/#:%7E:text=Kubernetes%20uses%20the%20logrotate%20tool,for%20the%20container%20are%20lost." rel="nofollow noreferrer"><code>unofficial-kubernetes</code></a>:</p>
<blockquote>
<p>An important consideration in node-level logging is implementing log rotation, so that logs don't consume all available storage on the node. Kubernetes uses the <code>logrotate</code> tool to implement log rotation.</p>
<p>Kubernetes performs log rotation daily, or if the log file grows beyond 10MB in size.<br />
Each rotation belongs to a single container; if the container repeatedly fails or the pod is evicted, all previous rotations for the container are lost.<br />
<strong>By default, Kubernetes keeps up to five logging rotations per container</strong>.</p>
</blockquote>
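<p>For reference, a logrotate rule matching those defaults could look like the following (illustrative only; the exact file name and log paths depend on your node image):</p>
<pre><code>/var/lib/docker/containers/*/*.log {
    rotate 5
    daily
    maxsize 10M
    copytruncate
    missingok
    notifempty
}
</code></pre>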
| VonC |
<p>I've placed a docker compose file <strong>project.yaml</strong> at the location /etc/project/project.yaml</p>
<p>The file as well as the project directory have the same file permissions, i.e. -rwxrwxrwx,
but when I run docker-compose</p>
<pre><code>sudo docker-compose -f ./project.yaml up -d
</code></pre>
<p>It errors out with the following:</p>
<pre><code>Cannot find the file ./project.yaml
</code></pre>
<p>I have checked multiple times and there seems to be no permission issue. Can anyone tell me why this happens and what the solution would be?</p>
| Simple Fellow | <p>Beside using the full path, as <a href="https://stackoverflow.com/questions/73500671/docker-compose-cannot-find-the-yaml-file#comment129801726_73500671">commented</a> by <a href="https://stackoverflow.com/users/14312225/quoc9x">quoc9x</a>, double-check your current working directory when you call a command with a relative path <code>./project.yaml</code></p>
<p>If you are not in the right folder, that would explain the error message.</p>
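<p>For instance (using the absolute path from the question):</p>
<pre class="lang-bash prettyprint-override"><code># Check where you actually are and that the file is visible from there
pwd
ls -l /etc/project/project.yaml

# Or avoid the issue entirely by using the absolute path
sudo docker-compose -f /etc/project/project.yaml up -d
</code></pre>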
| VonC |
<p>When deploying the app, certain environment-specific settings need to be applied to the server.xml, which cannot be applied when the container is built. Has anyone tried using a volume_mounted config file, and where would I tell tomcat the location of this custom config?</p>
| Mark Jaffe | <p>To illustrate <a href="https://stackoverflow.com/users/19246531/nataraj-medayhal">Nataraj Medayhal</a>'s answer, you can find an example based on a configMap in <a href="https://github.com/devlinx9/k8s_tomcat_custer" rel="nofollow noreferrer"><code>devlinx9/k8s_tomcat_custer</code></a>.</p>
<blockquote>
<p>The configMap is used to control the configuration of tomcat, in this we added the cluster configuration, save the following text in a file <code>configmap-tomcat.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: testconfig
data:
server.xml: |
<?xml version="1.0" encoding="UTF-8"?>
<Server port="8005" shutdown="SHUTDOWN">
<Listener className="org.apache.catalina.startup.VersionLoggerListener" />
<Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
...
</Server>
</code></pre>
<p>Create the configMap:</p>
<pre><code>kubectl apply -f configmap-tomcat.yaml -n {namespace}
</code></pre>
</blockquote>
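<p>To complete the picture, here is a minimal sketch of how that configMap could then be mounted over Tomcat's default <code>server.xml</code> (the image name and mount path are assumptions; adjust them to your Tomcat image):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat:9
        volumeMounts:
        - name: config
          mountPath: /usr/local/tomcat/conf/server.xml
          subPath: server.xml   # mount only the server.xml key over the default file
      volumes:
      - name: config
        configMap:
          name: testconfig
</code></pre>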
| VonC |
<p>I have started to learn GitOps ArgoCD. I have one basic doubt. I am unable to test ArgoCD because I do not have any Cluster. It will be so kind of you if you can clear my doubts.</p>
<ol>
<li>As an example, currently I am running my deployment using the <code>test:1</code> Docker image. Then, using Jenkins, I upload <code>test:2</code> and put <code>test:2</code> in place of <code>test:1</code>; ArgoCD detects the change and applies the new image in the cluster.
But what if I had used <code>test:latest</code> before, and then, using Jenkins, I upload a new image with the same name <code>test:latest</code>? What will happen now? Will ArgoCD deploy the image (the name and tag of the new and previous image are the same)?</li>
</ol>
| Arghya Roy | <p>If you need automation, you can consider <a href="https://argocd-image-updater.readthedocs.io/en/latest/" rel="nofollow noreferrer"><strong>Argo CD Image Updater</strong></a>, which does include in its <a href="https://argocd-image-updater.readthedocs.io/en/latest/basics/update-strategies/" rel="nofollow noreferrer">update strategies</a>:</p>
<p><code>latest/newest-build</code> - Update to the most recently built image found in a registry</p>
<blockquote>
<p>It is important to understand, that this strategy will consider the build date of the image, and not the date of when the image was tagged or pushed to the registry.</p>
<p>If you are tagging the same image with multiple tags, these tags will have the same build date.<br />
In this case, Argo CD Image Updater will sort the tag names lexically descending and pick the last tag name of that list.</p>
<p>For example, consider an image that was tagged with the <code>f33bacd</code>, <code>dev</code> and <code>latest</code> tags.<br />
You might want to have the <code>f33bacd</code> tag set for your application, but Image Updater will pick the <code>latest</code> tag name.</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code>argocd-image-updater.argoproj.io/image-list: myimage=some/image
argocd-image-updater.argoproj.io/myimage.update-strategy: latest
</code></pre>
| VonC |
<p>I have a Spring Boot application running on Kubernetes. Now I am trying to set up a Horizontal Pod Autoscaler (HPA).</p>
<p>I have one doubt: without modifying any autoscaler thresholds, does the autoscaler consider pods only when they are ready (after the readiness probe succeeds), or even when readiness is not complete?</p>
<p>Example</p>
<ul>
<li>A Java app takes 5 minutes to start (i.e. to complete the readiness probe)</li>
<li>During these 5 minutes, the CPU for this app is at 100% of its assigned CPU requests</li>
<li>HPA is configured to scale if targetCPUUtilization reaches 50%</li>
<li>Now what would happen in this case, when the HPA condition is satisfied but the pod is not ready yet? Will it add one more pod right away, or will it first wait for pods to be ready and then start the timer for <strong>--horizontal-pod-autoscaler-initial-readiness-delay</strong>?</li>
</ul>
<p>I am assuming the answer lies in this, but it is not clear to me:</p>
<blockquote>
<p>Due to technical constraints, the HorizontalPodAutoscaler controller
cannot exactly determine the first time a pod becomes ready when
determining whether to set aside certain CPU metrics. Instead, it
considers a Pod "not yet ready" if it's unready and transitioned to
unready within a short, configurable window of time since it started.
This value is configured with the
--horizontal-pod-autoscaler-initial-readiness-delay flag, and its default is 30 seconds. Once a pod has become ready, it considers any
transition to ready to be the first if it occurred within a longer,
configurable time since it started. This value is configured with the
--horizontal-pod-autoscaler-cpu-initialization-period flag, and its default is 5 minutes</p>
</blockquote>
<p>Also, can anyone explain <strong>horizontal-pod-autoscaler-cpu-initialization-period</strong> and <strong>horizontal-pod-autoscaler-initial-readiness-delay</strong>? The documentation is confusing.</p>
| Ankit Bansal | <p>A <a href="https://github.com/jthomperoo/predictive-horizontal-pod-autoscaler" rel="nofollow noreferrer">Digital OCean Predictive Horizontal Pod Autoscaler</a> has the same kind of parameter: <a href="https://predictive-horizontal-pod-autoscaler.readthedocs.io/en/latest/reference/configuration/#cpuinitializationperiod" rel="nofollow noreferrer"><code>cpuInitializationPeriod</code></a>.</p>
<p>It rephrases what <code>--horizontal-pod-autoscaler-cpu-initialization-period</code> as:</p>
<blockquote>
<p>the period after pod start when CPU samples might be skipped.</p>
</blockquote>
<p>And for <code>horizontal-pod-autoscaler-initial-readiness-delay</code></p>
<blockquote>
<p>the period after pod start during which readiness changes will be treated as initial readiness.</p>
</blockquote>
<p>The idea is to:</p>
<ul>
<li>not trigger any scaling based on CPU change alone (because the initial <code>cpu-initialization-period</code> means the pod is still being ready, with potential CPU spike)</li>
<li>not trigger any scaling based on readiness state changes (because the initial <code>readiness-delay</code> means, even if the pod reports it is ready, that can change during that delay)</li>
</ul>
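<p>Both are <code>kube-controller-manager</code> flags, so tuning them means adjusting the controller manager's startup arguments (the values shown here are the documented defaults quoted above):</p>
<pre><code>kube-controller-manager \
  --horizontal-pod-autoscaler-cpu-initialization-period=5m \
  --horizontal-pod-autoscaler-initial-readiness-delay=30s
</code></pre>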
<p><a href="https://github.com/kubernetes/website/issues/12657" rel="nofollow noreferrer"><code>kubernetes/website</code> issue 12657</a> has more (mainly to confirm the original documentation is confusing).</p>
| VonC |
<p>I am not able to attach to a container in a pod. I am receiving the message below:</p>
<pre><code>Error from server (Forbidden): pods "sleep-76df4f989c-mqvnb" is forbidden: cannot exec into or attach to a privileged container
</code></pre>
<p>Could someone please tell me what I am missing?</p>
| chilu | <p>This seems to be a permission (possibly <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a>) issue.<br>
See <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="nofollow noreferrer">Kubernetes pod security-policy</a>.</p>
<p>For instance <a href="https://github.com/gluster/gluster-kubernetes/issues/432" rel="nofollow noreferrer"><code>gluster/gluster-kubernetes</code> issue 432</a> points to <a href="https://github.com/Azure/acs-engine/pull/1961" rel="nofollow noreferrer">Azure PR 1961</a>, which disable the <code>cluster-admin</code> rights (although you can <a href="https://github.com/Azure/acs-engine/issues/2200#issuecomment-363070771" rel="nofollow noreferrer">customize/override the admission-controller flags passed to the API server</a>).</p>
<p>So it depends on the nature of your Kubernetes environment.</p>
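<p>As a first check, you can ask the API server what your current identity is allowed to do (the namespace is a placeholder):</p>
<pre class="lang-bash prettyprint-override"><code># Can I exec/attach at all?
kubectl auth can-i create pods/exec -n <namespace>

# List everything the current user is allowed to do in that namespace
kubectl auth can-i --list -n <namespace>
</code></pre>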
| VonC |
<p>So I am running a Django app in a Kubernetes pod, and when trying to save an image file:</p>
<pre><code>img_obj.image.save(img_file, File(img_file_org))
</code></pre>
<p>I am getting a "no space left" error:</p>
<pre><code> File "/code/ocr_client/management/commands/pdf_to_image.py", line 126, in handle
img_obj.image.save(img_file, File(img_file_org))
File "/opt/conda/lib/python3.7/site-packages/django/db/models/fields/files.py", line 88, in save
self.name = self.storage.save(name, content, max_length=self.field.max_length)
File "/opt/conda/lib/python3.7/site-packages/django/core/files/storage.py", line 54, in save
return self._save(name, content)
File "/opt/conda/lib/python3.7/site-packages/django/core/files/storage.py", line 274, in _save
fd = os.open(full_path, self.OS_OPEN_FLAGS, 0o666)
OSError: [Errno 28] No space left on device: '/code/pca_back_data/media/file1.png'
</code></pre>
<p>I already ran</p>
<pre><code>kubectl exec <my-pod> -- df -ah
</code></pre>
<p>And there is still 20% of the space left (100GB)</p>
<p>I also ran, as suggested in another thread:</p>
<pre><code>kubectl exec <my-pod> -- df -hi
</code></pre>
<p>and the usage of inodes was only 5%</p>
<p>I am not sure what else might be the issue here.
Is there some config in Kubernetes that restricts storage usage for a pod/process?</p>
| Alex T | <p>If you are getting the "<code>No space left on device</code>" error even when the disk usage and inode usage are low, it might be that the disk resources for your specific pod are limited. The Kubernetes system can set limits on resources like CPU, memory, and disk storage.</p>
<p>So start with checking the Kubernetes resource limits and requests for your pod: run <code>kubectl describe pod <my-pod></code> to check if there are resource limits or requests set. Look for something like:</p>
<pre class="lang-yaml prettyprint-override"><code>Resources:
Limits:
ephemeral-storage: 1Gi
Requests:
ephemeral-storage: 500Mi
</code></pre>
<p>The <a href="https://docs.openshift.com/container-platform/4.13/storage/understanding-ephemeral-storage.html" rel="nofollow noreferrer"><code>ephemeral-storage</code></a> represents the storage available for your pod to use. If it is set too low, you might need to adjust it.</p>
<p>Try also to set said resource requests and limits yourself: You can specify the resources available for your pod by adding the following in your pod or deployment configuration with:</p>
<pre class="lang-yaml prettyprint-override"><code>resources:
requests:
ephemeral-storage: "1Gi"
limits:
ephemeral-storage: "2Gi"
</code></pre>
<p>That allows your pod to request 1 GiB of ephemeral storage and limit it to using 2 GiB. Adjust these values as needed based on the size of the images you are dealing with.</p>
<hr />
<p>But another approach would be to consider using Persistent Volumes (PV): If your application needs to store a lot of data (like many large image files), consider using a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Persistent Volume (PV) and Persistent Volume Claim (PVC)</a>. PVs represent physical storage in a cluster and can be used to provision durable storage resources. You would need to change your application's code or configuration to write to this PV.</p>
<p>Define a PV and PVC:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pv
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
</code></pre>
<p>And in your pod spec, you would add:</p>
<pre class="lang-yaml prettyprint-override"><code>volumes:
- name: my-storage
persistentVolumeClaim:
claimName: my-pvc
</code></pre>
<p>And mount it into your pod:</p>
<pre class="lang-yaml prettyprint-override"><code>volumeMounts:
- mountPath: "/code/pca_back_data/media"
name: my-storage
</code></pre>
<p>When you create the Persistent Volume (PV) and mount it into your pod at the same location (<code>/code/pca_back_data/media</code>), your application will continue to write to the same directory without needing to change the Django settings.</p>
<p>The only difference is that the storage will now be backed by a Persistent Volume which is designed to handle larger amounts of data and will not be subject to the same restrictions as the pod's ephemeral storage.</p>
<p>In that case, no changes would be required in your Django settings. The application would continue to write to the same path but the underlying storage mechanism would have changed.</p>
<p>However, do note that <code>hostPath</code> should be used only for development or testing. For production, consider using a networked storage like an NFS server, or a cloud provider's storage service.</p>
<hr />
<blockquote>
<p>I am already using PVC that is attached to this pod. And it has more than enough storage. What is stranger is that not all files are failing this… –</p>
</blockquote>
<p>As I commented, it could be a concurrency issue: if multiple processes or threads are trying to write to the same file or location simultaneously, that might cause some operations to fail with "<code>No space left on device</code>" errors.<br />
Also, although the PVC has enough available space, individual filesystems on the PVC might have quotas that limit how much space they can use. Check if there are any such quotas set on your filesystem.</p>
<p>The OP confirms:</p>
<blockquote>
<p>There is something like this happening - multiple processes are using the same PVC directory, maybe not exactly same file but same parent directory can be accessed by those processes.</p>
</blockquote>
<p>Multiple processes using the same PVC directory or parent directory should generally not be a problem, as long as they are not trying to write to the same file at the same time. But if these processes are creating a large number of files or very large files, and if your PVC or underlying filesystem has a limit on the number of files (inodes) or the total size of files it can handle, that could potentially lead to the "No space left on device" error.</p>
<p>You can check for filesystem quotas on a PVC:</p>
<ul>
<li><p>Connect to your pod: <code>kubectl exec -it <your-pod> -- /bin/bash</code></p>
</li>
<li><p>Install the <code>quota</code> package: This can usually be done with <a href="https://doc.ubuntu-fr.org/quota" rel="nofollow noreferrer"><code>apt-get install quota</code></a> on Debian/Ubuntu systems or <code>yum install quota</code> on CentOS/RHEL systems. If these commands do not work, you may need to look up how to install <code>quota</code> for your specific container's operating system.</p>
</li>
<li><p>Check quotas: Run <code>quota -v</code> to view quota information. If quotas are enabled and you are nearing or at the limit, you will see that here.</p>
</li>
</ul>
<p>If your filesystem does not support quotas or they are not enabled, you will not get useful output from <code>quota -v</code>. In that case, or if you are unable to install the <code>quota</code> package, you might need to check for quotas from outside the pod, which would depend on your Kubernetes setup and cloud provider.</p>
<p>If you are still having trouble, another possible culprit could be a Linux kernel parameter called <a href="https://unix.stackexchange.com/q/444998/7490"><code>fs.inotify.max_user_watches</code></a>, which can limit the number of files the system can monitor for changes. If you are opening and not properly closing a large number of files, you could be hitting this limit. You can check its value with <code>cat /proc/sys/fs/inotify/max_user_watches</code> and increase it if necessary.</p>
<hr />
<p>The OP adds:</p>
<blockquote>
<p>I think the issue in my case is that <code>/tmp</code> folder inside the pod is running out of space (in Django <code>/tmp</code> is used for the files when saving to database if I understand correctly), not sure how to expand size of it?</p>
</blockquote>
<p>Yes, you're correct. Django, like many other systems, uses the <code>/tmp</code> directory to handle temporary files, which includes processing file uploads. If the <code>/tmp</code> directory is running out of space, you can consider the following options:</p>
<ul>
<li>Increase the Pod ephemeral storage limit: as mentioned above, you can adjust the ephemeral storage requests and limits in your pod or deployment configuration, like so:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>resources:
requests:
ephemeral-storage: "2Gi" # Request 2Gi of ephemeral storage
limits:
ephemeral-storage: "4Gi" # Limit ephemeral storage usage to 4Gi
</code></pre>
<p>Remember to adjust these values according to your needs.</p>
<ul>
<li>Or use an <code>emptyDir</code> Volume for <code>/tmp</code>: meaning use a Kubernetes <code>emptyDir</code> volume for your <code>/tmp</code> directory. When a Pod is assigned to a Node, Kubernetes will create an <code>emptyDir</code> volume for that Pod, and it will exist as long as that Pod is running on that node. The <code>emptyDir</code> volume can use the node's storage space, and you can specify a size limit.</li>
</ul>
<p>Here is how you might define an <code>emptyDir</code> volume for <code>/tmp</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: my-container
image: my-image
volumeMounts:
- name: tmp-storage
mountPath: /tmp
volumes:
- name: tmp-storage
emptyDir:
medium: "Memory"
sizeLimit: "2Gi" # Set a size limit for the volume
</code></pre>
<p>The <code>medium: "Memory"</code> means that the <code>emptyDir</code> volume is backed by memory (tmpfs) instead of disk storage. If you remove this line, the <code>emptyDir</code> volume will use disk storage. The <code>sizeLimit</code> is optional.</p>
<ul>
<li>You can also consider using a dedicated PVC for <code>/tmp</code>: If the above options are not feasible or if you need more control over the storage for <code>/tmp</code>, you can also use a dedicated PVC for it, similar to the one you're using for <code>/code/pca_back_data/media</code>.</li>
</ul>
<p>Remember that changes to your pod or deployment configuration need to be applied with <code>kubectl apply -f <configuration-file></code>, and you may need to recreate your pod or deployment for the changes to take effect.</p>
<hr />
<p>The OP concludes in <a href="https://stackoverflow.com/questions/76570871/django-on-kubernetes-pod-no-space-left-on-device/76636516?noredirect=1#comment135345270_76636516">the comments</a>:</p>
<blockquote>
<p>I managed to solve this issue: looks like the GCP storage disc was somehow corrupted and we changed to another and it seems to be fine now.</p>
</blockquote>
| VonC |
<p>I have node.js application that I need to deploy to exising kubernetes cluster.</p>
<p>The cluster is setup using <code>kops</code> on AWS.</p>
<p>I have created <code>.gitlab-ci.yml</code> file for building docker images.</p>
<p>So, whenever a change is pushed to either <code>master</code> or <code>develop</code> branch. It will build the docker image.</p>
<p>I have already followed steps defined <a href="https://docs.gitlab.com/ee/user/project/clusters/add_remove_clusters.html#existing-kubernetes-cluster" rel="nofollow noreferrer">here</a> to add an existing cluster.</p>
<p>Now, I have to roll out updates to the existing Kubernetes cluster.</p>
<pre><code># This file is a template, and might need editing before it works on your project.
docker-build-master:
# Official docker image.
image: docker:latest
stage: build
services:
- docker:dind
before_script:
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
script:
- docker build --pull -t "$CI_REGISTRY_IMAGE:prod" .
- docker push "$CI_REGISTRY_IMAGE:prod"
only:
- master
docker-build-dev:
# Official docker image.
image: docker:latest
stage: build
services:
- docker:dind
before_script:
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
script:
- docker build --pull -t "$CI_REGISTRY_IMAGE:dev" .
- docker push "$CI_REGISTRY_IMAGE:dev"
only:
- develop
</code></pre>
<p>For now, I am using shared runner.</p>
<p><em>How can I integrate a Kubernetes deployment into GitLab CI/CD, after the image is built, to deploy on AWS (the cluster is created with kops)?</em></p>
<p><em>For registry I am using gitlab's container registry not docker hub.</em></p>
<p><strong>Update</strong></p>
<p>I changed the configuration to the following:</p>
<pre><code>stages:
- docker-build
- deploy
docker-build-master:
image: docker:latest
stage: docker-build
services:
- docker:dind
before_script:
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
script:
- docker build --pull -t "$CI_REGISTRY_IMAGE:prod" .
- docker push "$CI_REGISTRY_IMAGE:prod"
only:
- master
deploy-prod:
stage: deploy
image: roffe/kubectl
script:
- kubectl apply -f scheduler-deployment.yaml
only:
- master
docker-build-dev:
image: docker:latest
stage: docker-build
services:
- docker:dind
before_script:
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
script:
- docker build --pull -t "$CI_REGISTRY_IMAGE:dev" .
- docker push "$CI_REGISTRY_IMAGE:dev"
only:
- develop
</code></pre>
<p>But now I am getting the error below:</p>
<pre><code>roffe/kubectl with digest roffe/kubectl@sha256:ba13f8ffc55c83a7ca98a6e1337689fad8a5df418cb160fa1a741c80f42979bf ...
$ kubectl apply -f scheduler-deployment.yaml
error: the path "scheduler-deployment.yaml" does not exist
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1
</code></pre>
<p>File <code>scheduler-deployment.yaml</code> does exist in the root directory.</p>
| confusedWarrior | <blockquote>
<p>I suggest using Flux for deploying rather than invoking kubectl as part of the pipeline,</p>
</blockquote>
<p>That would be true because:</p>
<ul>
<li><p>the traditional <a href="https://docs.gitlab.com/ee/user/clusters/agent/gitops/agent.html" rel="nofollow noreferrer">GitOps with the agent for Kubernetes</a> has been deprecated with GitLab 16.2 (July 2023), and <a href="https://docs.gitlab.com/ee/user/clusters/agent/gitops.html" rel="nofollow noreferrer">replaced with Flux</a>, as shown <a href="https://docs.gitlab.com/ee/user/clusters/agent/gitops/flux_tutorial.html" rel="nofollow noreferrer">in this tutorial</a>. A minimal Flux manifest sketch follows this list.</p>
</li>
<li><p><a href="https://about.gitlab.com/releases/2023/08/22/gitlab-16-3-released/#flux-sync-status-visualization" rel="nofollow noreferrer">GitLab 16.3</a> (August 2023) adds:</p>
<blockquote>
<h2>Flux sync status visualization</h2>
<p>In previous releases, you probably used <code>kubectl</code> or another third-party tool to check the status of your Flux deployments. From GitLab 16.3, you can check your deployments with the environments UI.</p>
<p>Deployments rely on Flux <code>Kustomization</code> and <code>HelmRelease</code> resources to gather the status of a given environment, which requires a namespace to be configured for the environment. By default, GitLab searches the <code>Kustomization</code> and <code>HelmRelease</code> resources for the name of the project slug. You can customize the name GitLab looks for in the environment settings.</p>
<p><a href="https://i.stack.imgur.com/9hhlp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9hhlp.png" alt="https://about.gitlab.com/images/16_3/flux-badges.png -- Flux sync status visualization" /></a></p>
<p>See <a href="https://docs.gitlab.com/ee/ci/environments/kubernetes_dashboard.html#flux-sync-status" rel="nofollow noreferrer">Documentation</a> and <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/391581" rel="nofollow noreferrer">Issue</a>.</p>
</blockquote>
</li>
</ul>
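<p>For reference, a minimal Flux setup is a sketch like the following (the repository URL, branch, and path are placeholders to adapt to your project):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 1m
  url: https://gitlab.com/<group>/<project>.git
  ref:
    branch: master
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 5m
  path: "./"
  prune: true
  sourceRef:
    kind: GitRepository
    name: my-app
</code></pre>
<p>With this in place, the pipeline only builds and pushes the image, and Flux reconciles the manifests (including <code>scheduler-deployment.yaml</code>) from the repository.</p>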
| VonC |
<p>I have a couple of client applications. For each one I have a build pipeline that gets latest code, compiles it and plots the result to a <code>dist</code> folder (containing only html and js files).</p>
<p>These <code>dist</code> folders are synced, using Docker volumes, to a web server (<code>nginx</code>) container which actually hosts the client application.</p>
<p>The result is that my client is always "up" and I only need to update the <code>dist</code> folder of any client to deploy it, and never need to mess with the web server container.</p>
<p>However, I want to move my deployment to a different approach: only building the Docker images in pipelines (on code change) and using them on demand whenever deploying an environment.</p>
<p>The problem would be how to build the web server container while I don't want to rebuild all clients on any change, nor do I want to store the built output in source control. What would be the best approach?</p>
| Mugen | <p>You could consider a <a href="https://docs.docker.com/develop/develop-images/multistage-build/" rel="nofollow noreferrer">multi-stage build</a> with:</p>
<ul>
<li>the first stage being the build of your web server (which never changes, so it is cached)</li>
<li>the second stage being the build of your <code>dist</code> folder, into whose image you add the web server from the first stage.</li>
</ul>
<p>The end result is an image with both the web server and the static files to serve (instead of those files being in a volume), with only the static files being rebuilt.</p>
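<p>A minimal sketch of one common layout (the build stage first, nginx as the final stage; the Node image, build command, and output folder are assumptions to adapt to your pipeline):</p>
<pre><code># Stage 1: build the client bundle
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build        # assumed to emit ./dist

# Stage 2: serve only the static output with nginx
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
</code></pre>
<p>Because the final stage only copies the build output, a client code change rebuilds the first stage while the nginx layers stay cached, and nothing built ends up in source control.</p>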
| VonC |
<p>Currently, I have two microservices. I want to send a message to a KubeMQ queue from the first microservice and have it received by the second microservice. I am able to send a message to a KubeMQ queue using the code below:</p>
<pre><code>Queue queue = new Queue("QueueName", "ClientID", "localhost:50000");
SendMessageResult resSend = queue.SendQueueMessage(new Message()
.setBody(Converter.ToByteArray("some-simple_queue-queue-message"))
.setMetadata("someMeta"));
if (resSend.getIsError()) {
System.out.printf("Message enqueue error, error: %s", resSend.getError());
}
</code></pre>
<p>I need a listener in the second microservice in order to receive messages from the queue.
Below is the code provided by KubeMQ to receive messages:</p>
<pre><code> Queue queue = new Queue("QueueName", "ClientID", "localhost:50000");
ReceiveMessagesResponse resRec = queue.ReceiveQueueMessages(10, 1);
if (resRec.getIsError()) {
System.out.printf("Message dequeue error, error: %s", resRec.getError());
return;
}
System.out.printf("Received Messages %s:", resRec.getMessagesReceived());
for (Message msg : resRec.getMessages()) {
System.out.printf("MessageID: %s, Body:%s", msg.getMessageID(), Converter.FromByteArray(msg.getBody()));
}
</code></pre>
<p>How do I configure the second microservice to receive messages instantly, as they are added to the queue?</p>
<p>Please help.</p>
| Abhi | <blockquote>
<p>I need a Listener in the second microservice in order to receive the message from Queue.</p>
</blockquote>
<p>Why polling, when you can be notified through the <a href="https://docs.kubemq.io/learn/message-patterns/pubsub" rel="nofollow noreferrer">KubeMQ Pub/Sub pattern</a>?</p>
<p><a href="https://i.stack.imgur.com/0Y2hd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0Y2hd.png" alt="https://3720647888-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-M2b9dwAGbMPWty0fPGr%2F-M2cg-nL53EVg5rPnOZB%2F-M2chJWo9-utUu1Q8v2C%2Fpubsub.png?alt=media&token=e003b332-df67-4ffb-be81-6b9218cefc4e" /></a></p>
<p>In the context of message queues, "polling" refers to a process where your application continually checks the queue to see if a new message has arrived. This can be inefficient, as it requires your application to make many requests when there may not be any new messages to process.</p>
<p>On the other hand, a "listener" (also known as a subscriber or a callback) is a function that is automatically called when a new message arrives. This is more efficient because your application does not need to continually check the queue; instead, it can wait and react when a message arrives.</p>
<hr />
<p>The Publish-Subscribe pattern (or pub/sub) is a messaging pattern supported by KubeMQ, and it differs slightly from the queue-based pattern you are currently using.<br />
In the pub/sub pattern, senders of messages (publishers) do not program the messages to be sent directly to specific receivers (subscribers). Instead, the programmer “publishes” messages (events), without any knowledge of any subscribers there may be. Similarly, subscribers express interest in one or more events and only receive messages that are of interest, without any knowledge of any publishers.</p>
<p>In this pattern, KubeMQ provides two types of event handling, <code>Events</code> and <code>Events Store</code>.</p>
<ul>
<li><p>The <code>Events</code> type is an asynchronous real-time Pub/Sub pattern, meaning that messages are sent and received in real-time but only if the receiver is currently connected to KubeMQ. There is no message persistence available in this pattern.</p>
</li>
<li><p>The <code>Events Store</code> type, however, is an asynchronous Pub/Sub pattern with persistence. This means that messages are stored and can be replayed by any receiver, even if they were not connected at the time the message was sent.<br />
The system also supports replaying all events from the first stored event, replaying only the last event, or only sending new events.</p>
</li>
</ul>
<p>However, it is important to note that the uniqueness of a client ID is essential when using Events Store.<br />
At any given time, only one receiver can connect with a unique Client ID.<br />
If two receivers try to connect to KubeMQ with the same Client ID, one of them will be rejected. Messages can only be replayed once per Client ID and Subscription type. If a receiver disconnects and reconnects with any subscription type, only new events will be delivered for this specific receiver with that Client ID. To replay messages, a receiver needs to connect with a different Client ID.</p>
<p>Given these features, if you switch your architecture to a pub/sub pattern using the Events Store type, your second microservice could instantly receive messages as they are added into the channel, and even replay old messages if needed. You would need to ensure each microservice has a unique Client ID and manages its subscriptions appropriately.</p>
<p>However, the pub/sub pattern may require changes in the architecture and coding of your microservices, so you would need to evaluate whether this change is suitable for your use case. It is also important to note that the pub/sub pattern, especially with message persistence, may have different performance characteristics and resource requirements compared to the queue pattern.</p>
<hr />
<p>Here is a high-level overview of the classes that are present and their usage:</p>
<ol>
<li><p><code>Channel.java</code>: This class appears to represent a channel for sending events in a publish-subscribe model.</p>
</li>
<li><p><code>ChannelParameters.java</code>: This class defines the parameters for creating a Channel instance.</p>
</li>
<li><p><code>Event.java</code>: This class represents an event that can be sent via a Channel.</p>
</li>
<li><p><code>EventReceive.java</code>: This class is used to process received events.</p>
</li>
<li><p><code>Result.java</code>: This class contains the result of a sent event.</p>
</li>
<li><p><code>Subscriber.java</code>: This class allows you to subscribe to a channel and handle incoming events.</p>
</li>
</ol>
<p>So here is an example of how you might use the existing classes to publish and subscribe to messages.</p>
<pre class="lang-java prettyprint-override"><code>import io.kubemq.sdk.Channel;
import io.kubemq.sdk.ChannelParameters;
import io.kubemq.sdk.Result;
import io.kubemq.sdk.event.Event;
import io.kubemq.sdk.event.Subscriber;
public class KubeMQExample {
public static void main(String[] args) {
try {
// Initialize ChannelParameters
ChannelParameters params = new ChannelParameters();
params.setChannel("your_channel");
params.setClient("your_client_id");
// Initialize a new Channel
Channel channel = new Channel(params);
// Create a new Event
Event event = new Event();
event.setBody("Your message here".getBytes());
// Send the Event
Result sendResult = channel.SendEvent(event);
System.out.println("Event sent, Result: " + sendResult.getIsError());
// Initialize a new Subscriber
Subscriber subscriber = new Subscriber("localhost:5000");
// Subscribe to the Channel
subscriber.SubscribeToEvents(params, (eventReceive) -> {
System.out.println("Received Event: " + new String(eventReceive.getBody()));
});
} catch (Exception e) {
e.printStackTrace();
}
}
}
</code></pre>
<p>Do note that this code is based on the existing SDK and may not reflect the functionality of the original code. You will need to replace "<code>your_channel</code>" and "<code>your_client_id</code>" with your actual channel name and client ID. The event body can also be replaced with the actual message you want to send.</p>
<p>The <code>Subscriber</code> class is used here to listen for and process incoming events. The <code>SubscribeToEvents</code> method takes a <code>ChannelParameters</code> object and a lambda function that processes received events.</p>
<p>Do also note that the <code>Queue</code> and <code>EventsStore</code> classes seem to have been removed from the SDK. The SDK now seems to primarily use the publish-subscribe model, which differs from queue-based communication in that messages are not stored if no consumer is available to consume them.<br />
Events Store was a hybrid model that allowed for persistence in the pub/sub model, storing events that could be replayed by receivers connecting at a later time.</p>
<p>For your original functionality of reading queue messages and peeking at messages in a queue, unfortunately, it does not seem like the current state of the Kubemq Java SDK on the provided GitHub repository supports these actions.</p>
| VonC |
<p>Hi I am at a loss when it comes to configuring OIDC (Keycloak) with MinIO (both are deployed on same AKS cluster). I have configured an Ingress for both, both use the same wildcard cert that I got from DigiCert. However, whenever I got to configure OIDC (either via Helm values or manually in the console) I get the following error:</p>
<pre><code>Get "https://<FQDN_KEYCLOAK>/realms/master/.well-known/openid-configuration": tls: failed to verify certificate: x509: certificate signed by unknown authority
</code></pre>
<p>I tried to manually mount the tls secret into <code>/etc/minio/certs</code> on the MinIO pod but same error. Any suggestions? Thanks</p>
<p><strong>UPDATE</strong>: I get the same exception when integrating Keycloak (OIDC) with ArgoCD.</p>
| user1314147 | <p>Since you have an error "<code>TLS: failed to verify certificate: x509: certificate signed by unknown authority</code>", that should mean that the certificate used by the Keycloak server is not trusted by the system where MinIO and ArgoCD are running.</p>
<p>You should make sure the certificate authority (CA) that signed your certificate is trusted by the MinIO and ArgoCD systems. Since you mentioned that you are using a certificate from DigiCert, it is strange that it is not recognized, as DigiCert is a well-known CA.</p>
<p>First, check that the certificate is correctly installed on the Keycloak server. You can verify the certificate installation using tools like <a href="https://www.openssl.org/" rel="nofollow noreferrer">OpenSSL</a> or <a href="https://www.digicert.com/help/" rel="nofollow noreferrer">online SSL checkers</a>.</p>
<pre class="lang-bash prettyprint-override"><code>openssl s_client -connect <FQDN_KEYCLOAK>:443 -showcerts
</code></pre>
<p>When you run this command, look for the "Certificate chain" section in the output. If you see one, then make sure the certificates for all levels in the chain are present and correctly <a href="https://www.keycloak.org/server/enabletls" rel="nofollow noreferrer">configured in your server configuration, which, in this case, is Keycloak</a>.<br />
The certificates must be concatenated in a single file (<code>https-certificate-file</code>), with the server's certificate first, then the intermediate certificate(s), and finally the root certificate, to form a complete chain.</p>
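<p>For example (file names are placeholders):</p>
<pre class="lang-bash prettyprint-override"><code># Order matters: server certificate, then intermediate(s), then root
cat server.crt intermediate.crt root.crt > keycloak-chain.pem
</code></pre>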
<p>Update the certificate trust store on the <a href="https://min.io/docs/minio/linux/operations/network-encryption.html" rel="nofollow noreferrer">MinIO</a> and <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/tls/" rel="nofollow noreferrer">ArgoCD</a> systems to include the CA certificate that signed your Keycloak certificate.</p>
<ul>
<li><p>For MinIO, you need to correctly configure it to trust your CA certificate. You may add your certificate to the system trust store, or configure MinIO to use a custom CA certificate:</p>
<pre><code>mkdir -p /root/.minio/certs/CAs/
cp your-root-ca.crt /root/.minio/certs/CAs/
</code></pre>
<p>Restart the MinIO server after adding the certificates.</p>
</li>
<li><p>Similar to MinIO, for ArgoCD, update the trust store to include your CA certificate.</p>
</li>
</ul>
<p>All this assumes that the FQDN specified in the error message resolves correctly to your Keycloak server and that there are no network issues preventing connectivity.</p>
<p>You can use <code>curl</code> to test the certificate validation manually:</p>
<pre><code>curl -v https://<FQDN_KEYCLOAK>/realms/master/.well-known/openid-configuration --cacert /path/to/your/ca/cert.pem
</code></pre>
<p>If the curl command succeeds without certificate errors, it indicates the certificate is correctly installed and trusted.</p>
<hr />
<blockquote>
<p>This is a very strange error, as I have configured both of these services with Keycloak in other K8s environments, it appears to just be this one (running on AKS).</p>
<p>The certs are definitely installed in Keycloak using the Bitnami chart, and that curl command did not return a cert error (both using the above with Keycloak and using the FQDN of argo).</p>
</blockquote>
<p>Since you have successfully configured these services in other Kubernetes environments and are experiencing issues only in this specific AKS cluster, the issue should be specific to this particular AKS environment or the certificates involved. The successful <code>curl</code> command test is a strong indicator that the certificates on the Keycloak server are correctly installed and accessible from within the cluster.</p>
<p>So check AKS-specific configurations: AKS might have some specific security policies or networking configurations that affect TLS verification. Confirm that there are no such configurations interfering with the connection to Keycloak.</p>
<p>And review mounted certificates in pods: Inspect MinIO and ArgoCD pods to see whether the CA certificates are correctly mounted and accessible.</p>
<pre class="lang-bash prettyprint-override"><code>kubectl exec -it <pod_name> -n <namespace> -- /bin/sh
ls /etc/minio/certs/CAs/ # or corresponding path
</code></pre>
<p>Make sure that there are no Kubernetes network policies or Azure-specific firewall rules that may be interfering with the certificate verification process.<br />
For instance, use <code>curl</code> from inside other pods in the cluster to rule out any network issues specific to MinIO or ArgoCD.</p>
<pre class="lang-bash prettyprint-override"><code>kubectl run --rm -i --tty --image=busybox testpod -- sh
wget --ca-cert=/path/to/ca.crt https://<FQDN_KEYCLOAK>/realms/master/.well-known/openid-configuration
</code></pre>
<hr />
<p>The <a href="https://stackoverflow.com/users/1314147/user1314147">OP</a> adds in <a href="https://stackoverflow.com/questions/77092098/minio-openid-keycloak-on-aks-failed-to-verify-tls/77141650#comment136118029_77141650">the comments</a>:</p>
<blockquote>
<p>I ended up figuring it out, it was due to 2 issues:</p>
<ul>
<li>I was trying to use one wildcard cert for <code>auth.<BASE_URL></code> and <code>argo.<BASE_URL></code>, switched to using letsencrypt and</li>
<li>I needed to convert the certs to PEM format and mount to <code>/opt/bitnami/keycloak/certs</code></li>
</ul>
</blockquote>
<p>True:</p>
<h2>Use of a single wildcard certificate:</h2>
<ul>
<li><p><a href="https://en.wikipedia.org/wiki/Wildcard_certificate" rel="nofollow noreferrer">Wildcard certificates</a> can be convenient for securing multiple subdomains under a single base domain.<br />
However, depending on the issuing Certificate Authority (CA) and the applications involved, wildcard certificates may not always work seamlessly. And using a single certificate for multiple services can sometimes introduce complexities or limitations.</p>
</li>
<li><p>By opting for Let's Encrypt, you could easily issue separate certificates for each subdomain (<code>auth.<BASE_URL></code> and <code>argo.<BASE_URL></code>), isolating each service's SSL configuration.<br />
That is generally easier to manage and troubleshoot. Let's Encrypt's automation features can also streamline the renewal process, further reducing maintenance overhead.</p>
</li>
</ul>
<h2>Conversion to PEM format:</h2>
<p>Certificates can come in <a href="https://en.wikipedia.org/wiki/X.509#Certificate_filename_extensions" rel="nofollow noreferrer">various formats</a> like DER, PEM, PFX, etc. The applications involved (in this case, Bitnami's Keycloak chart) might expect the certificates to be in a particular format.</p>
<ul>
<li><p>PEM (Privacy-Enhanced Mail) is one of the most commonly used certificate formats, recognizable by the <code>-----BEGIN CERTIFICATE-----</code> and <code>-----END CERTIFICATE-----</code> delimiters. Not all software can read all formats, so converting to PEM is often necessary for compatibility.</p>
</li>
<li><p>Mounting to <code>/opt/bitnami/keycloak/certs</code>: That is specific to the <a href="https://github.com/bitnami/charts/tree/main/bitnami/keycloak" rel="nofollow noreferrer">Bitnami Keycloak Helm chart</a>. Helm charts usually define specific paths where they expect to find configurations or certificates. You needed to make sure the PEM-formatted certificates were placed in this specific directory for Keycloak to use them properly.</p>
</li>
</ul>
| VonC |
<p>I tried following the instructions from the official documentation page of "<strong>Operator SDK</strong>". The device I'm trying to install it on is running on Windows 10, AMD64. I installed GNU Make via Chocolatey.</p>
<pre><code> make --version
GNU Make 4.4.1
Built for Windows32
Copyright (C) 1988-2023 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
</code></pre>
<p>As per the "<a href="https://sdk.operatorframework.io/docs/installation/#compile-and-install-from-master" rel="nofollow noreferrer">Compile and install from master</a>" section of the official documentation:</p>
<pre><code>git clone https://github.com/operator-framework/operator-sdk
cd operator-sdk
git checkout master
make install
</code></pre>
<p>The last line "make install" fails:</p>
<pre><code>make install
go install -gcflags "all=-trimpath=C:/Users/erjan/Downloads/operator_k8s_sdk" -asmflags "all=-trimpath=C:/Users/erjan/Downloads/operator_k8s_sdk" -ldflags " -X 'github.com/operator-framework/operator-sdk/internal/version.Version=v1.31.0+git' -X 'github.com/operator-framework/operator-sdk/internal/version.GitVersion=v1.31.0-3-gd21ed649' -X 'github.com/operator-framework/operator-sdk/internal/version.GitCommit=d21ed6499ebfc8ecdb4508e1c2a2a0cfd2a151f3' -X 'github.com/operator-framework/operator-sdk/internal/version.KubernetesVersion=v1.26.0' -X 'github.com/operator-framework/operator-sdk/internal/version.ImageVersion=v1.31.0' " ./cmd/# github.com/containerd/containerd/archive
..\..\..\go\pkg\mod\github.com\containerd\[email protected]\archive\tar_windows.go:234:19: cannot use syscall.NsecToFiletime(hdr.AccessTime.UnixNano()) (value of type syscall.Filetime) as "golang.org/x/sys/windows".Filetime value in struct literal
..\..\..\go\pkg\mod\github.com\containerd\[email protected]\archive\tar_windows.go:235:19: cannot use syscall.NsecToFiletime(hdr.ModTime.UnixNano()) (value of type syscall.Filetime) as "golang.org/x/sys/windows".Filetime value in struct literal
..\..\..\go\pkg\mod\github.com\containerd\[email protected]\archive\tar_windows.go:236:19: cannot use syscall.NsecToFiletime(hdr.ChangeTime.UnixNano()) (value of type syscall.Filetime) as "golang.org/x/sys/windows".Filetime value in struct literal
..\..\..\go\pkg\mod\github.com\containerd\[email protected]\archive\tar_windows.go:239:17: cannot use syscall.NsecToFiletime(hdr.ModTime.UnixNano()) (value of type syscall.Filetime) as "golang.org/x/sys/windows".Filetime value in struct literal
..\..\..\go\pkg\mod\github.com\containerd\[email protected]\archive\tar_windows.go:257:27: cannot use syscall.NsecToFiletime(createTime.UnixNano()) (value of type syscall.Filetime) as "golang.org/x/sys/windows".Filetime value in assignment
make: *** [Makefile:75: install] Error 1
</code></pre>
<p>What could be the reason for this error?</p>
| ERJAN | <p>I see there are several lines where Go is complaining about a type mismatch. Specifically, it is mentioning that <code>syscall.Filetime</code> cannot be used as <code>"golang.org/x/sys/windows".Filetime</code>.</p>
<pre><code>..\..\..\go\pkg\mod\github.com\containerd\[email protected]\archive\tar_windows.go:234:19: cannot use syscall.NsecToFiletime(hdr.AccessTime.UnixNano()) (value of type syscall.Filetime) as "golang.org/x/sys/windows".Filetime value in struct literal
</code></pre>
<p>A value of type <code>syscall.Filetime</code> is being used where a value of type <code>"golang.org/x/sys/windows".Filetime</code> is expected. This is a type mismatch issue. The file <code>tar_windows.go</code> within the <code>github.com/containerd/containerd</code> package seems to be the source of these errors, and it appears to be related to how file timestamps are being handled on Windows.</p>
<p><a href="https://github.com/containerd/containerd/releases/tag/v1.4.11" rel="nofollow noreferrer"><code>containerd/containerd</code> v1.4.11</a> seems quite old, considering the <a href="https://github.com/operator-framework/operator-sdk/blob/d21ed6499ebfc8ecdb4508e1c2a2a0cfd2a151f3/go.mod#L72" rel="nofollow noreferrer">operator-framework/operator-sdk</a> project itself needs <a href="https://github.com/containerd/containerd/releases/tag/v1.7.0" rel="nofollow noreferrer"><code>containerd</code> v1.7.0</a></p>
<p>Since I got the same error through <code>make install</code>, I first tried <code>go build -a -v ...</code>, which does not trigger errors.
The Makefile fails on <code>go install ./cmd/{operator-sdk,helm-operator}</code>:</p>
<ul>
<li><code>go install ./cmd/helm-operator</code> works.</li>
<li><code>go install ./cmd/operator-sdk</code> has the error</li>
</ul>
<p>That bug is confirmed by your issue, <a href="https://github.com/operator-framework/operator-sdk/issues/6585" rel="nofollow noreferrer"><code>operator-framework/operator-sdk</code> issue 6585</a>:</p>
<blockquote>
<p>We do not officially support or build binaries for windows. But we are open to receiving any help with contributions.<br />
Duplicate of <a href="https://github.com/operator-framework/operator-sdk/issues/6586" rel="nofollow noreferrer">#6586</a></p>
<p>Operator SDK <a href="https://sdk.operatorframework.io/docs/overview/#platform-support" rel="nofollow noreferrer">doesn't officially support or build binaries for Windows</a>.</p>
<p>However, there have been instances where users could still build SDK binary from master on their windows machines. This error seems to be coming from <code>github.com\containerd\[email protected]</code>.<br />
Looks like it may have an issue. Also, we have explicitly pinned <code>containerd</code> to 1.4.11:</p>
<p><a href="https://github.com/operator-framework/operator-sdk/blob/d21ed6499ebfc8ecdb4508e1c2a2a0cfd2a151f3/go.mod#L243" rel="nofollow noreferrer">operator-sdk/go.mod</a>:</p>
<pre><code>github.com/containerd/containerd => github.com/containerd/containerd v1.4.11
</code></pre>
<p>because a bump in it breaks github.com/deislabs/oras.</p>
<p>I would suggest to start by checking if a bump in dependencies fixes these issues without breaking anything else in SDK. If so, we can merge that in master to fix it. Unfortunately, supporting windows builds has not been in our roadmap, but if you would like to try it out, we would appreciate any contributions.</p>
</blockquote>
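<p>If you want to experiment with that suggestion, the change would be the pinned <code>replace</code> directive in <code>go.mod</code> (illustrative; v1.7.0 is the version the project already requires elsewhere, and the <code>oras</code> breakage mentioned above may resurface):</p>
<pre><code>replace github.com/containerd/containerd => github.com/containerd/containerd v1.7.0
</code></pre>
<p>followed by <code>go mod tidy</code> and a rebuild.</p>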
| VonC |
<p>I am trying to create a Kubernetes cluster with the intention of hosting a Docker registry, but after installing kubectl (via Homebrew on Mac) along with minikube, I am getting <code>The connection to the server localhost:8080 was refused - did you specify the right host or port?</code> when I run <code>kubectl version</code> or any other command. I have previously used the Docker Desktop app with Kubernetes, so I don't know if there is any config I need to replace.</p>
<p>I have discovered there is no context set in the kubectl config, but if I run <code>kubectl config get-contexts</code> there is nothing there.</p>
| Tom Williams | <p><a href="https://discuss.kubernetes.io/t/the-connection-to-the-server-localhost-8080-was-refused-did-you-specify-the-right-host-or-port/1464/9" rel="nofollow noreferrer">This thread</a> mentions:</p>
<blockquote>
<p>That error should only come up if you have no contexts configured in your client.<br />
If you run <code>kubectl config view</code> and you get something like this:</p>
<pre><code>$ kubectl config view
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
</code></pre>
<p>Then no contexts are configured.</p>
</blockquote>
<p>And:</p>
<blockquote>
<p>Getting <code>kubectl</code> to run really depends on how you installed it.<br />
Basically, if you install and have a proper config file, it should always work.</p>
<p>So, either an old file from a previous installation is there or something silly like that (although usually difficult to spot).</p>
<p>Also, make sure the commands don’t fail (some on the post pasted that the step to copy the <code>kubectl config</code> failed). That is the way to authorize to the cluster, so it won’t never work if that step doesn’t work</p>
</blockquote>
<p>Example of possible resolution:</p>
<pre class="lang-bash prettyprint-override"><code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
kubectl get nodes
</code></pre>
<p>(From "<a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#more-information" rel="nofollow noreferrer">Creating a cluster with <code>kubeadm</code></a>")</p>
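<p>Since you are using minikube, the simpler path is usually to let minikube create and select the context for you:</p>
<pre class="lang-bash prettyprint-override"><code>minikube start
kubectl config get-contexts
kubectl config use-context minikube
kubectl version
</code></pre>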
| VonC |
<p>I'm facing an issue with <a href="https://github.com/oauth2-proxy/manifests/tree/main/helm/oauth2-proxy" rel="nofollow noreferrer">oauth2 proxy</a> and Ingress Nginx (with the latest versions) in a Kubernetes cluster where the <code>X-Auth-Request</code> headers are not being passed through to the client during the standard oauth authentication flow. I'm specifically using Azure as the auth provider.</p>
<p>Here's the relevant portion of my oauth Proxy configuration:</p>
<pre><code>pass_access_token = true
pass_authorization_header = true
pass_user_headers = true
set_xauthrequest = true
</code></pre>
<p>When I explicitly call <code>/oauth2/auth</code>, I get the headers as expected. However, during the standard OAuth2 auth flow, none of the headers are returned with any request.</p>
<p>This situation is somewhat similar to another question here: <a href="https://stackoverflow.com/questions/64666156/oauth2-proxy-do-not-pass-x-auth-request-groups-header">Oauth2-Proxy do not pass X-Auth-Request-Groups header</a>, but in my case, I'm not receiving any of the <code>X-Auth-Request</code> headers, except when I call <code>/oauth2/auth</code> directly.</p>
<p>I've also tried adding the following snippet to my application Ingress configuration with no luck:</p>
<pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
auth_request_set $email $upstream_http_x_auth_request_email;
access_by_lua_block {
if ngx.var.email ~= "" then
ngx.req.set_header("X-Auth-Request-Email", ngx.var.email)
end
}
</code></pre>
<p>I've gone through multiple configurations, read numerous blog posts, and scoured GitHub issues, but haven't been able to resolve this issue. Does anyone have any insights into what could be causing this behavior?</p>
| Daniel Taub | <p>You do have a <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource" rel="nofollow noreferrer">Kubernetes Ingress resource</a> that manages external access to the services in your cluster. That is typically defined in a YAML file and applied to your Kubernetes cluster using <code>kubectl apply -f <filename.yaml></code>.</p>
<p>Something like (mentioned for other readers):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
annotations:
# annotations go here
spec:
rules:
- host: myapp.mydomain.com
http:
paths:
- backend:
service:
name: my-service
port:
number: 80
</code></pre>
<p>In the <code>annotations</code> section, you can specify various settings that the Nginx Ingress Controller should apply. I would suggest, from the <a href="https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#external-authentication" rel="nofollow noreferrer">kubernetes/ingress-nginx annotations External Authentication</a>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
annotations:
nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
nginx.ingress.kubernetes.io/auth-response-headers: "x-auth-request-user, x-auth-request-groups, x-auth-request-email"
spec:
rules:
- host: myapp.mydomain.com
http:
paths:
- backend:
service:
name: my-service
port:
number: 80
</code></pre>
<p>(And <code>kubectl apply -f <your-ingress-config>.yaml</code>)</p>
<p>That would explicitly tell the Ingress to pick these headers from the authentication response and pass them to the upstream application.</p>
<p>Doing this updates the Ingress resource in your Kubernetes cluster and subsequently should update the Nginx Ingress Controller's configuration. After applying, give it some time to propagate, and then you can check if the <code>X-Auth-Request</code> headers are being passed as you expect.</p>
<p>If not, and if nothing is obvious in <code>kubectl logs <nginx-ingress-pod></code> output, check the OAuth2 Proxy logs (<code>kubectl logs <oauth2-proxy-pod></code>) to see if the headers are generated as expected (because if they are not... no amount of Lua script would change the end result).</p>
| VonC |
<p>I was trying to create a namespace using <code>kubectl</code>, but I got this error:</p>
<blockquote>
<p>Error from server (Forbidden): error when creating "namespacefoo": namespaces is forbidden: User "[email protected]" cannot create namespaces at the cluster scope</p>
</blockquote>
<p>Is there a concept of "scope" in Kubernetes? I couldn't find any information about different types of scope. If I cannot create namespace at the cluster scope, where can I create the namespace? How can I check which "scopes" do I have access to?</p>
| Ufder | <p>That depends on your Kubernetes environment.</p>
<p>This <a href="https://stackoverflow.com/a/49094802/6309">answer suggest</a> (in a <a href="https://cloud.google.com/" rel="nofollow noreferrer">Google Cloud environment</a>):</p>
<blockquote>
<p>That suggests that <code>gcloud config set container/use_client_certificate</code> is set to <code>true</code> i.e. that <code>gcloud</code> is expecting a client cluster certificate to authenticate to the cluster (this is what the 'client' in the error message refers to).</p>
<p>Unsetting <code>container/use_client_certificate</code> by issuing the following command in the <code>gcloud config</code> ends the need for a legacy certificate or credentials and prevents the error message:</p>
<pre><code>gcloud config unset container/use_client_certificate
</code></pre>
<p>Issues such as this may be more likely if you are using an older version of <code>gcloud</code> on your home workstation or elsewhere.</p>
</blockquote>
<p>Still, <a href="https://github.com/kubernetes/kubernetes/issues/62361#issuecomment-397215728" rel="nofollow noreferrer">kubernetes/kubernetes issue 62361</a> mentions the same error message.</p>
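<p>As for the last part of the question ("which scopes do I have access to?"): in practice this is governed by <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a>, and you can query your own permissions with <code>kubectl auth can-i</code> (assuming a reasonably recent <code>kubectl</code>):</p>
<pre><code># can I create namespaces at the cluster scope?
kubectl auth can-i create namespaces

# list everything I am allowed to do, cluster-wide or in a namespace
kubectl auth can-i --list
kubectl auth can-i --list --namespace=default
</code></pre>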
| VonC |
<p>I am trying to deploy a simple Python Flask application in a DigitalOcean Kubernetes cluster using the deployment and service configuration below. The Flask app runs on port 8080, and the same port is exposed through the load balancer.</p>
<p>flask app</p>
<pre><code> if __name__ == '__main__':
app.run(debug=True, host='0.0.0.0', port=8080)
</code></pre>
<p>deployment</p>
<pre><code> ubuntu@ubuntu-22lts:~$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: unit7-app-deploy
spec:
replicas: 2
selector:
matchLabels:
app: unit-app
template:
metadata:
labels:
app: unit-app
spec:
containers:
- name: unit-app
image: <username>/flask-app:latest
imagePullPolicy: IfNotPresent
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 8080
</code></pre>
<p>and service for load balance creation</p>
<pre><code> apiVersion: v1
kind: Service
metadata:
name: unit7-app-service
spec:
selector:
app: unit7-app-deploy
ports:
- port: 8080
targetPort: 8080
protocol: TCP
type: LoadBalancer
</code></pre>
<p>Now I am trying to access my app through the Kubernetes external IP on port 8080, which is not working. The pod logs show that my Flask app is running.</p>
<pre><code> ubuntu@ubuntu-22lts:~$ kubectl --kubeconfig=k8s-1-27-2-do-0-blr1-cluster1-kubeconfig.yaml logs -f unit7-app-deploy-6568dss8-ddsds
* Serving Flask app 'run'
* Debug mode: on
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:8080
* Running on http://10.244.0.214:8080
</code></pre>
<p>Where am I wrong? Kindly help me.</p>
| Maria628 | <p>Just in case this is part of your issue, your Service's selector does not match your Deployment's label.<br />
The selector in your service should match the labels in your deployment to establish the correct connection.</p>
<p>In your Deployment:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: unit7-app-deploy
spec:
replicas: 2
selector:
matchLabels:
app: unit-app
</code></pre>
<p>The label for the pods being deployed is <code>app: unit-app</code>.</p>
<p>But in your Service:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: unit7-app-service
spec:
selector:
app: unit7-app-deploy
</code></pre>
<p>Here, the service selector is looking for pods with label <code>app: unit7-app-deploy</code>.</p>
<p>The selector in the service configuration should match the label you defined in your Deployment configuration for the pods, which is <code>app: unit-app</code>.<br />
For instance, service configuration should look like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: unit7-app-service
spec:
selector:
app: unit-app # Change from 'unit7-app-deploy' to 'unit-app'
ports:
- port: 8080
targetPort: 8080
protocol: TCP
type: LoadBalancer
</code></pre>
<p>That mismatch is likely the reason your load balancer is not working correctly, as it is not able to find the correct pods to send traffic to.</p>
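<p>A quick way to confirm the selector now matches is to check that the Service has endpoints (the names below are the ones from your manifests):</p>
<pre class="lang-bash prettyprint-override"><code># an empty ENDPOINTS column means the selector matches no pods
kubectl get endpoints unit7-app-service

# cross-check the labels the pods actually carry
kubectl get pods -l app=unit-app --show-labels
</code></pre>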
| VonC |
<p>I have set up a Kubernetes cluster with 2 <code>master</code> nodes and 4 <code>worker nodes</code>, and I am trying to do an etcd backup
as described in the documentation, inside my etcd container.</p>
<pre><code>ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
--cacert=<trusted-ca-file> --cert=<cert-file> --key=<key-file> \
snapshot save <backup-file-location>
</code></pre>
<p>but I get the following error:</p>
<pre><code>Error: unknown command "save" for "etcdctl"
</code></pre>
<p>Is there something to consider upfront?</p>
| pwoltschk | <p>Check first if this is similar to <a href="https://stackoverflow.com/q/70906419/6309">this question</a></p>
<blockquote>
<p>I forgot to set <code>$ENDPOINT</code>.</p>
</blockquote>
<p>That can happen if the <code>--endpoints</code> flag is not correctly followed by an actual endpoint.<br />
In your case, because of the lack of a specified endpoint after the <code>--endpoints</code> flag, <code>etcdctl</code> is interpreting "<code>snapshot</code>" as the endpoint, and "<code>save</code>" as the command - which could result in the error you are seeing.</p>
<p>A better formed command would be:</p>
<pre class="lang-bash prettyprint-override"><code>ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
--cacert=<trusted-ca-file> --cert=<cert-file> --key=<key-file> \
snapshot save <backup-file-location>
</code></pre>
<p>Do replace <code><trusted-ca-file></code>, <code><cert-file></code>, <code><key-file></code>, and <code><backup-file-location></code> with your actual file paths.</p>
<p>The <code>--endpoints</code> flag specifies the endpoint to connect to your etcd server. In a Kubernetes cluster, you typically connect to the etcd server through localhost (127.0.0.1) on port 2379, which is the default etcd client port.</p>
<hr />
<p>Also, just in case: in some shells, using the <code>\</code> character for line continuation might cause issues, so try running the command all on one line:</p>
<p><code>ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=<trusted-ca-file> --cert=<cert-file> --key=<key-file> snapshot save <backup-file-location></code></p>
<p>Another issue might be that the <code><backup-file-location></code> you are specifying does not exist or the <code>etcdctl</code> does not have permission to write to it. Make sure the directory you are trying to save the snapshot to exists and that the user running the <code>etcdctl</code> command has permission to write to it.</p>
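<p>Once the command succeeds, you can sanity-check the snapshot file (with <code>etcdctl</code> versions that still ship this subcommand; newer releases moved it to <code>etcdutl</code>):</p>
<pre class="lang-bash prettyprint-override"><code>ETCDCTL_API=3 etcdctl snapshot status <backup-file-location> --write-out=table
</code></pre>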
| VonC |
<p><a href="https://i.stack.imgur.com/IOowW.png" rel="nofollow noreferrer">It looks like this</a>.
Using Windows version 10,
Docker for Windows (Docker version): 18.09.2.</p>
<p>how to resolve this issue ?</p>
| PRUDHVI CHOWDHARY NEKKALAPUDI | <p>Kubernetes should be running.</p>
<p>But check your cluster-info:</p>
<pre><code>> kubectl cluster-info
Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
</code></pre>
<p>That is reported in <a href="https://github.com/docker/machine/issues/1094#issuecomment-460817882" rel="nofollow noreferrer">docker/machine</a>, <a href="https://github.com/docker/for-win/issues/2099" rel="nofollow noreferrer">docker/for-win</a> and <a href="https://github.com/kubernetes/minikube/issues/3562#issuecomment-457142656" rel="nofollow noreferrer">kubernetes/minikube</a>.</p>
<p>While the issue is pending, and if no firewall/proxy is involved, I have seen the error caused because <a href="https://stackoverflow.com/a/52313787/6309">the port is already taken</a>.</p>
<p>See also <a href="https://www.ntweekly.com/2018/05/08/kubernetes-windows-error-unable-connect-server-dial-tcp-16445-connectex-no-connection-made-target-machine-actively-refused/" rel="nofollow noreferrer">this article</a>:</p>
<blockquote>
<h2>Issue</h2>
<p>The reason you are getting the error message is that Kubernetes is not looking into the correct configuration folder because the configuration path is not configured on the Windows 10 machine.</p>
<h2>Solution</h2>
<p>To fix the problem, I will run the command below that will tell Kubernetes where to find the configuration file on the machine.</p>
<pre><code>Powershell
[Environment]::SetEnvironmentVariable("KUBECONFIG", $HOME + "\.kube\config", [EnvironmentVariableTarget]::Machine)
</code></pre>
</blockquote>
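<p>Also worth checking, in case <code>kubectl</code> is simply pointing at the wrong cluster: list the available contexts and switch to the Docker Desktop one (named <code>docker-for-desktop</code> on versions of that era, <code>docker-desktop</code> on newer ones):</p>
<pre><code>kubectl config get-contexts
kubectl config use-context docker-for-desktop
</code></pre>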
| VonC |
<p>I use <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">Ingress-nginx</a> and I need to rewrite the <code>Host</code> header sent to the proxied Kubernetes Service based on the <code>path</code> of the original request. This must happen based on some variable or regular expression. I cannot hard code paths because this <strong>needs to work for any path</strong> automatically.</p>
<p>Basically I need something like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
namespace: my-ns
annotations:
# below what's commented out almost works but adds a leading "/" to header
# nginx.ingress.kubernetes.io/upstream-vhost: $request_uri.example.com
nginx.ingress.kubernetes.io/upstream-vhost: $PATH.example.com
spec:
ingressClassName: nginx
rules:
- host: 'example.com'
http:
paths:
- backend:
service:
name: my-svc
port:
number: 80
path: /$PATH
pathType: Prefix
</code></pre>
<p>I wonder if this is somehow possible to implement with annotation <code>nginx.ingress.kubernetes.io/configuration-snippet</code> or <code>nginx.ingress.kubernetes.io/server-snippet</code> but I don't understand these annotations well enough. I've tried out these annotations by passing in a <code>location</code> and also attempted to use <code>map</code> module with no luck so far.</p>
<p>Context: This reverse proxy is set up in front of Knative and Knative's routing works through <code>Host</code> header.</p>
| nichoio | <blockquote>
<p>without any <code>snippet</code> annotation</p>
</blockquote>
<p>Then... given that <a href="https://github.com/kubernetes/ingress-nginx/blob/main/rootfs/etc/nginx/lua/plugins/README.md" rel="nofollow noreferrer">NGINX Ingress annotation snippets allow LUA code execution</a> (which can be <a href="https://docs.bridgecrew.io/docs/prevent-nginx-ingress-annotation-snippets-which-contain-lua-code-execution" rel="nofollow noreferrer">a security issue</a>, by the way), you might consider using <a href="https://www.lua.org/" rel="nofollow noreferrer">Lua</a> through a <a href="https://github.com/kubernetes/ingress-nginx/blob/main/rootfs/etc/nginx/lua/plugins/README.md" rel="nofollow noreferrer">custom Lua plugin</a>.</p>
<p>A simple Lua script is enough to capture the first segment of the request path and set it as the <code>Host</code> header suffixed with <code>.example.com</code>.</p>
<p>Create a <code>dynamic-host-header/main.lua</code>: (inspired from <a href="https://github.com/ElvinEfendi/ingress-nginx-openidc/blob/master/rootfs/etc/nginx/lua/plugins/openidc/main.lua" rel="nofollow noreferrer">the official example</a>)</p>
<pre class="lang-lua prettyprint-override"><code>local _M = {}
function _M.rewrite()
-- Check if this is the default backend, if so, skip processing.
if ngx.var.proxy_upstream_name == "upstream-default-backend" then
return
end
local path = ngx.var.uri
local captured_path = string.match(path, "^/([^/]+)")
if captured_path then
ngx.req.set_header("Host", captured_path .. ".example.com")
end
end
return _M
</code></pre>
<p>Before building a custom <code>ingress-nginx</code> image, you can start by testing that plugin in the <code>ingress-nginx</code> pod:</p>
<p>Create a ConfigMap with the content of your plugin:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl create configmap dynamic-host-header-plugin --from-file=dynamic-host-header/main.lua -n <NAMESPACE_WHERE_INGRESS_IS_RUNNING>
</code></pre>
<p>And modify the <code>ingress-nginx</code> deployment to mount this ConfigMap to <code>/etc/nginx/lua/plugins/dynamic-host-header</code> (as mentioned in <a href="https://github.com/kubernetes/ingress-nginx/blob/main/rootfs/etc/nginx/lua/plugins/README.md#installing-a-plugin" rel="nofollow noreferrer">the "installing plugin" section</a>):</p>
<pre class="lang-bash prettyprint-override"><code>kubectl edit deployment -n <NAMESPACE_WHERE_INGRESS_IS_RUNNING> nginx-ingress-controller
</code></pre>
<p>Add the volume and <code>volumeMounts</code> as follows:</p>
<pre class="lang-yaml prettyprint-override"><code>...
spec:
containers:
- name: nginx-ingress-controller
...
volumeMounts:
- mountPath: /etc/nginx/lua/plugins/dynamic-host-header
name: dynamic-host-header
...
volumes:
- name: dynamic-host-header
configMap:
name: dynamic-host-header-plugin
</code></pre>
<p>Now, you need to enable the plugin by updating the <code>nginx-ingress-controller</code>'s ConfigMap:</p>
<p>Edit the ConfigMap:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl edit configmap nginx-configuration -n <NAMESPACE_WHERE_INGRESS_IS_RUNNING>
</code></pre>
<p>Add (or update) the <code>plugins</code> configuration setting:</p>
<pre class="lang-yaml prettyprint-override"><code>data:
plugins: "dynamic-host-header"
</code></pre>
<p>Finally, restart the ingress-nginx pods to pick up the changes:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl rollout restart deployment/nginx-ingress-controller -n <NAMESPACE_WHERE_INGRESS_IS_RUNNING>
</code></pre>
<p>With this Lua plugin enabled, the <code>Host</code> header should be dynamically set based on the request path, as required.</p>
<hr />
<p>Note: the <a href="https://stackoverflow.com/users/6312338/nichoio">OP nichoio</a> adds in <a href="https://stackoverflow.com/questions/76835282/ingress-nginx-set-host-header-dynamically-depending-on-path/76845902?noredirect=1#comment135618666_76845902">the comments</a>:</p>
<blockquote>
<p>Turns out I can't get it to work yet.<br />
Using <code>ngx.req.set_header()</code>, I can add new headers and alter most existing headers (e.g. <code>user-agent</code>). Host however won't change.<br />
I'm debugging this with an Ingress pointing to a svc/pod running <code>mendhak/http-https-echo</code>.</p>
</blockquote>
<p>The behavior you have observed is because the <code>Host</code> header in Nginx is a bit special. When you change it with <a href="https://github.com/openresty/lua-nginx-module#ngxreqset_header" rel="nofollow noreferrer"><code>ngx.req.set_header()</code></a>, it does not necessarily reflect in the request sent to the upstream server. The value that gets sent as the <code>Host</code> header to the upstream is defined by the <a href="https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/#passing-request-headers" rel="nofollow noreferrer"><code>proxy_set_header Host</code> directive</a>.</p>
<p>However, there is a workaround using Lua.</p>
<p>Instead of setting the <code>Host</code> header directly in the Lua script, you can assign the new <code>Host</code> value to a variable.</p>
<pre class="lang-lua prettyprint-override"><code>local _M = {}
function _M.rewrite()
-- Check if this is the default backend, if so, skip processing.
if ngx.var.proxy_upstream_name == "upstream-default-backend" then
return
end
local path = ngx.var.uri
local captured_path = string.match(path, "^/([^/]+)")
if captured_path then
ngx.var.dynamic_host_header = captured_path .. ".example.com"
end
end
return _M
</code></pre>
<p>Here, instead of <code>ngx.req.set_header("Host", ...)</code>, a custom variable <code>ngx.var.dynamic_host_header</code> is assigned the desired value.</p>
<p>Then, you can use this variable in a custom <code>proxy_set_header</code> directive.</p>
<p>Use the <code>nginx.ingress.kubernetes.io/configuration-snippet</code> annotation: Add this to your Ingress resource. That lets you set the custom proxy header.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
namespace: my-ns
annotations:
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header Host $dynamic_host_header;
</code></pre>
<p>The <code>proxy_set_header</code> directive in the annotation makes sure that the desired <code>Host</code> header is sent to the upstream server.</p>
<hr />
<p>The <a href="https://stackoverflow.com/users/6312338/nichoio">OP nichoio</a>, however, confirms <a href="https://stackoverflow.com/questions/76835282/ingress-nginx-set-host-header-dynamically-depending-on-path/76845902#comment135732838_76845902">in the comments</a> that:</p>
<blockquote>
<p>Unfortunately, <code>configuration-snippet</code> also didn't work.<br />
<a href="https://en.wiktionary.org/wiki/IIRC" rel="nofollow noreferrer">IIRC</a>, the <code>$dynamic_host_header</code> var is not available inside the snippet when following the alternative solution.</p>
<p>Workaround for now is to create one Ingress resource per path/header.</p>
</blockquote>
<p>Another approach you might consider is to use a service mesh like <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a> to manage headers and routing rules more flexibly. Istio provides more fine-grained control over HTTP headers, including the Host header.<br />
While introducing a service mesh adds complexity, it might give you the control you need for this specific scenario.</p>
<p>In an Istio-based approach, you could leverage <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/" rel="nofollow noreferrer">Istio's <code>VirtualService</code></a> to specify the routing rules and header manipulation logic. Specifically, you can use the <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPRoute" rel="nofollow noreferrer"><code>headers</code> field in the HTTP route</a> to rewrite the <code>Host</code> header based on the request path.</p>
<p>For instance:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: my-virtualservice
spec:
hosts:
- "*"
gateways:
- my-gateway
http:
- match:
- uri:
prefix: "/path1"
route:
- destination:
host: path1-service.default.svc.cluster.local
headers:
request:
set:
Host: "path1.example.com"
- match:
- uri:
prefix: "/path2"
route:
- destination:
host: path2-service.default.svc.cluster.local
headers:
request:
set:
Host: "path2.example.com"
# additional path-based rules
</code></pre>
<ul>
<li><p>The <code>VirtualService</code> applies to all incoming requests (<code>hosts: ["*"]</code>) on a specific Istio gateway (<code>my-gateway</code>).</p>
</li>
<li><p>Two HTTP routes are defined; each matches a URI prefix (<code>/path1</code> and <code>/path2</code>).</p>
</li>
<li><p>For each HTTP route, the <code>destination</code> specifies which Kubernetes service to route to.</p>
</li>
<li><p>The <code>headers</code> field allows you to manipulate headers. In this case, the <code>Host</code> header is rewritten based on the matching path.</p>
</li>
</ul>
<p>This works with an Istio Gateway named <code>my-gateway</code> that is configured to handle the incoming traffic. You would also need to replace <code>path1-service.default.svc.cluster.local</code> and <code>path2-service.default.svc.cluster.local</code> with the actual service names you are using in your cluster.</p>
<p>That approach provides a way to conditionally rewrite the <code>Host</code> header based on the request path, and it could scale as you can define any number of such path-based rules in the <code>VirtualService</code> specification.</p>
| VonC |
<p>I'm new to DevOps specifically using golang and microservice architecture.</p>
<p>I'm wondering if go applications should or should not be deployed in containers (Docker). In this case, I have a system built with micro-service architecture. For example here, I have 2 web services, A and B. Also, I have another web server that acts as a gateway in front of those two.</p>
<p>Both A and B need access to a database, MySQL for example. A handles table A, and B handles table B.</p>
<p>I know that in Go, source codes are compiled into a single executable binary file. And because I have 3 services here, I have 3 binaries. All three run as web servers exposing JSON REST API.</p>
<p>My questions are these:</p>
<ul>
<li><p><strong>Can I deploy these servers together in one host running on different ports?</strong>
If my host gets an IP x.x.x.x for example, my gateway can run on x.x.x.x:80, A on port 81, and B on port 82 for example. A and B will talk to a MySQL server somewhere outside or maybe inside the same host. Is this a good practice? Can Continuous Deployment work with this practice?</p>
</li>
<li><p><strong>Why should I deploy and run those binaries inside containers like Docker?</strong>
I know that since its release few years ago, Docker had found its way to be integrated inside a development workflow easily. But of course using Docker is not as simple as just compiling the code into a binary and then moving it to a deployment server. Using Docker we have to put the executable inside the container and then move the container as well to the deployment server.</p>
</li>
<li><p><strong>What about scalability and high availability without using Docker?</strong>
Can I just replicate my services and run them all at once in different hosts using a load balancer? This way I should deploy A, B, and gateway in one host, and another A, B, and gateway in another host, then set up the load balancer in front of them. A, B, and the gateway runs in ports 80, 81, and 82 respectively. This way I could have thousands of nodes in a form of VMs or LXD containers maybe, spread across hundreds of bare metals, deployed with a simple bash script and ssh, or Ansible if things get complex. Yes or no?</p>
</li>
<li><p><strong>And what about the scalability and availability of using Docker?</strong>
Should I just put all my services inside containers and manage them with something like Kubernetes to achieve high availability? Doing this does add overhead, right? Because the team will have to learn new technology like Kubernetes if they haven't known it yet.</p>
</li>
<li><p>Can you give me an example of some best practices for deploying golang services?</p>
</li>
</ul>
| gregory112 | <blockquote>
<p>I'm wondering if go applications should or should not be deployed in containers (Docker)<br>
Why should I deploy and run those binaries inside containers like Docker?</p>
</blockquote>
<p>Of course, provided you separate the build from the actual final image (in order not to include build dependencies in said final image)<br>
See "<a href="https://made2591.github.io/posts/goa-docker-multistage" rel="noreferrer">Golang, Docker and multistage build</a>" from <strong><a href="https://made2591.github.io/about/" rel="noreferrer">Matteo Madeddu</a></strong>.</p>
<blockquote>
<p>Can I deploy these servers together in one host running on different ports?</p>
</blockquote>
<p>Actually, they could all run in their own container on their own port, even if that port is the same.<br>
Inter-container communication will work, using <strong><a href="https://docs.docker.com/engine/reference/builder/#expose" rel="noreferrer">EXPOSEd ports</a></strong>.
However, if they are accessed from outside, then their <a href="https://docs.docker.com/engine/reference/run/#expose-incoming-ports" rel="noreferrer"><em>published</em> ports</a> need to be different indeed.</p>
<blockquote>
<p>What about scalability and high availability without using Docker?<br>
And what about the scalability and availability of using Docker?</p>
</blockquote>
<p>As soon as you are talking about dynamic status, some kind of orchestration will be involved: see <a href="https://docs.docker.com/engine/swarm/" rel="noreferrer">Docker Swarm</a> or <a href="https://kubernetes.io/" rel="noreferrer">Kubernetes</a> for efficient cluster management.<br>
<a href="https://blog.docker.com/2017/12/kubernetes-in-docker-ee/" rel="noreferrer">Both are available with the latest docker</a>.</p>
<p>Examples:</p>
<ul>
<li>"<a href="https://medium.com/wattpad-engineering/building-and-testing-go-apps-monorepo-speed-9e9ca4978e19" rel="noreferrer"><strong>Building and testing Go apps + monorepo + speed</strong></a>": Or, how we test and build go code in a monorepo, with TravisCI, and deploy to Docker, quickly and easily. From <strong><a href="https://twitter.com/jharlap" rel="noreferrer">Jonathan Harlap</a></strong>, Principal Engineer @ Wattpad</li>
<li>"<a href="https://blog.alexellis.io/introducing-functions-as-a-service/" rel="noreferrer"><strong>Introducing Functions as a Service (OpenFaaS)</strong></a>", from <strong><a href="https://twitter.com/alexellisuk" rel="noreferrer">Alex Ellis</a></strong></li>
</ul>
| VonC |
<p>is it possible to chain requests through multiple backends with the new <a href="https://gateway-api.sigs.k8s.io/" rel="nofollow noreferrer">https://gateway-api.sigs.k8s.io/</a> ?</p>
<p>The idea is to have a flow depending on the response headers of each service ie:</p>
<p>Request -> Gateway -> [ first backend service "Custom forward Header" -> second backend service -> "Custom forward Header" -> x service ] -> Response</p>
| Leonel Franchelli | <p>In the context of the <a href="https://gateway-api.sigs.k8s.io/" rel="nofollow noreferrer">Kubernetes Gateway API</a>, it is generally designed to manage <em>inbound</em> request traffic effectively through various kinds of Gateways, such as HTTPRoute, TCPRoute, etc.</p>
<p>This question is more about managing east/west traffic — that is, the traffic flowing between services inside the same Kubernetes cluster.</p>
<p>This should be possible with a service mesh: Leveraging a service mesh such as <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a> or <a href="https://linkerd.io/" rel="nofollow noreferrer">Linkerd</a> can provide more sophisticated routing capabilities, including conditional routing based on headers.<br />
That is part of the <a href="https://gateway-api.sigs.k8s.io/concepts/gamma/" rel="nofollow noreferrer">GAMMA initiative</a>, which is still very much a work in progress.</p>
<p>In the meantime, "<a href="https://wso2.com/library/blogs/the-future-of-api-gateways-on-kubernetes/" rel="nofollow noreferrer">The Future of API Gateways on Kubernetes</a>" (Aug. 2023) from <a href="https://twitter.com/pubuduspace" rel="nofollow noreferrer">Pubudu Gunatilaka</a> (Senior Technical Lead @ WSO2) points to:</p>
<blockquote>
<p>In 2022, <a href="https://twitter.com/mattklein123" rel="nofollow noreferrer">Matt Klein</a>, the creator of Envoy, introduced a new project called <a href="https://gateway.envoyproxy.io/" rel="nofollow noreferrer"><strong>Envoy Gateway</strong></a>, specifically targeting API gateways.<br />
Envoy already had the necessary components for building an API gateway, including a proxy layer; a configurable filter architecture for network traffic filtering, routing, and processing; and xDS APIs for transmitting data to Envoy proxies.<br />
The open source Envoy Gateway project adds a management layer to handle an Envoy proxy as a standalone gateway or as a Kubernetes-managed API gateway.</p>
</blockquote>
<p>You do not need an Envoy Gateway, but, as <a href="https://imesh.ai/about-us.html" rel="nofollow noreferrer">Debasree Panda</a> confirms in "<a href="https://imesh.ai/blog/what-is-envoy-gateway/" rel="nofollow noreferrer">What is Envoy Gateway, and why is it required for Kubernetes?</a>":</p>
<blockquote>
<p><a href="https://www.envoyproxy.io/" rel="nofollow noreferrer">Envoy proxy</a>, the data plane of Istio service mesh, is used for handling east-west traffic ( service-to-service communication within a data center)</p>
</blockquote>
<p>So you should be able to implement your use case, using for instance <a href="https://www.envoyproxy.io/docs/envoy/v1.27.0/configuration/http/http_filters/lua_filter.html#headers" rel="nofollow noreferrer">LUA filter / <code>headers()</code></a>.</p>
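<p>For instance, here is a minimal sketch of an Envoy HTTP Lua filter that reads a custom header from a backend response and propagates it (the header names are hypothetical; the actual chaining to the next backend would live in your routing configuration):</p>
<pre class="lang-yaml prettyprint-override"><code>http_filters:
- name: envoy.filters.http.lua
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
    inline_code: |
      function envoy_on_response(response_handle)
        -- read the custom header set by the first backend
        local fwd = response_handle:headers():get("x-custom-forward")
        if fwd ~= nil then
          -- surface it so the next hop (or the client) can act on it
          response_handle:headers():add("x-forwarded-by", fwd)
        end
      end
</code></pre>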
| VonC |
<p>I have downloaded Kubernetes from GitHub and now I want to run it from the downloaded files, not from GitHub. Could you please help me?</p>
| HamiBU | <p>You can follow:</p>
<ul>
<li>"<a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/" rel="nofollow noreferrer">Assign Memory Resources to Containers and Pods</a>"</li>
<li>"<a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/" rel="nofollow noreferrer">Assign CPU Resources to Containers and Pods</a>"</li>
</ul>
<p>That is:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: cpu-demo
namespace: cpu-example
spec:
containers:
- name: demo-ctr
image: vish/stress
resources:
limits:
cpu: "1"
memory: "200Mi"
requests:
cpu: "0.5"
memory: "100Mi"
args:
- -cpus
- "2"
</code></pre>
<p>At the pod level: "<a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/" rel="nofollow noreferrer">Configure a Pod Quota for a Namespace</a>".</p>
| VonC |
<p>Under Linux it's possible to read the control file <code>memory.usage_in_bytes</code> <a href="https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt" rel="nofollow noreferrer">for Control Group v1</a> or <code>memory.current</code> for <a href="https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html" rel="nofollow noreferrer">Control Group v2</a>. How to get the memory usage of a <a href="https://learn.microsoft.com/en-us/virtualization/windowscontainers/about/" rel="nofollow noreferrer">Windows container</a>?</p>
<p>According to the <a href="https://kubernetes.io/docs/concepts/configuration/windows-resource-management/" rel="nofollow noreferrer">Windows resource management</a> Kubernetes docs, the Windows concept for the process isolation is about <a href="https://learn.microsoft.com/en-us/windows/win32/procthread/job-objects" rel="nofollow noreferrer">job objects</a>. I found that it's possible to get <a href="https://learn.microsoft.com/en-us/windows/win32/api/winnt/ns-winnt-jobobject_extended_limit_information" rel="nofollow noreferrer">JOBOBJECT_EXTENDED_LIMIT_INFORMATION</a> information providing <code>PeakJobMemoryUsed</code>. However, it appears querying this information requires elevated rights (because I got <code>Could not get job information: [0x0005] Access is denied.</code>).</p>
<p>Is there any other way to get information about Windows containers memory usage?</p>
| Christian Ammer | <blockquote>
<p>I would like to determine the memory usage from a C++ program</p>
</blockquote>
<p>To expand on <a href="https://stackoverflow.com/users/15511041/yangxiaopo-msft">YangXiaoPo-MSFT</a>'s <a href="https://stackoverflow.com/questions/76849219/how-to-retrieve-the-memory-usage-of-a-windows-container#comment135506300_76849219">comment</a>, I just tried on my PC (outside a Kubernetes setting though) to retrieve these metrics programmatically from C++ using the <a href="https://learn.microsoft.com/en-us/windows/win32/perfctrs/using-the-pdh-functions-to-consume-counter-data" rel="noreferrer">Performance Data Helper (PDH) library</a>:</p>
<p>I added <code>pdh.lib</code> to my linker input, and used it with:</p>
<p><strong>GPUProcessMemoryQuery.cpp</strong>:</p>
<pre class="lang-cpp prettyprint-override"><code>#include <windows.h>
#include <pdh.h>
#include <pdhmsg.h>
#include <iostream>
int main()
{
PDH_HQUERY hQuery = NULL;
PDH_HCOUNTER hCounter = NULL;
PDH_STATUS pdhStatus = PdhOpenQuery(NULL, NULL, &hQuery);
if (pdhStatus != ERROR_SUCCESS)
{
std::cerr << "Failed to open PDH query.\n";
return 1;
}
pdhStatus = PdhAddCounter(hQuery, L"\\GPU Process Memory(*)\\Dedicated Usage", 0, &hCounter);
if (pdhStatus != ERROR_SUCCESS)
{
std::cerr << "Failed to add PDH counter.\n";
return 1;
}
pdhStatus = PdhCollectQueryData(hQuery);
if (pdhStatus != ERROR_SUCCESS)
{
std::cerr << "Failed to collect PDH query data.\n";
return 1;
}
PDH_FMT_COUNTERVALUE counterValue;
pdhStatus = PdhGetFormattedCounterValue(hCounter, PDH_FMT_LONG, NULL, &counterValue);
if (pdhStatus != ERROR_SUCCESS)
{
std::cerr << "Failed to get formatted counter value.\n";
return 1;
}
std::wcout << L"GPU Process Memory (Dedicated Usage): " << counterValue.longValue << L"\n";
PdhCloseQuery(hQuery);
return 0;
}
</code></pre>
<p><a href="https://i.stack.imgur.com/E3GWD.png" rel="noreferrer"><img src="https://i.stack.imgur.com/E3GWD.png" alt="Visual Studio" /></a></p>
<p>I got: <code>GPU Process Memory (Dedicated Usage): 35549184</code>, without requiring elevated rights.</p>
<hr />
<p>In your case, measuring the memory usage of a specific Windows container using the Performance Counters on Windows is a bit more involved.</p>
<p>You first need to determine the appropriate performance counter: Windows containers' performance metrics should be under namespaces like <code>Hyper-V Container</code> or similar (this may change depending on the Windows Server version and container runtime).</p>
<p>Before diving into C++, you might want to use <code>Performance Monitor</code> (type <a href="https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/perfmon" rel="noreferrer"><code>perfmon</code></a> in the Windows start menu) to browse available counters related to containers. That will give you an idea of what is available and the exact path you will need to use in your code.</p>
<p>Once you have the right counter path:</p>
<pre class="lang-cpp prettyprint-override"><code>// Change this line:
pdhStatus = PdhAddCounter(hQuery, L"\\GPU Process Memory(*)\\Dedicated Usage", 0, &hCounter);
// To something like:
pdhStatus = PdhAddCounter(hQuery, L"\\Hyper-V Container\\Memory Metric (or appropriate counter)", 0, &hCounter);
</code></pre>
<p>It is crucial to identify which instance of the counter you are interested in, since each container should have its own instance. In the Performance Monitor, you can see instances of each counter (like individual container names or IDs). Replace the <code>*</code> in the counter path with the exact instance name to monitor a specific container.</p>
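<p>To list the available instances programmatically, here is a sketch using <a href="https://learn.microsoft.com/en-us/windows/win32/api/pdh/nf-pdh-pdhenumobjectitemsw" rel="noreferrer"><code>PdhEnumObjectItems</code></a> (the object name <code>Hyper-V Container</code> is an assumption; use whatever name Performance Monitor shows on your host, and link against <code>pdh.lib</code> as before):</p>
<pre class="lang-cpp prettyprint-override"><code>#include <windows.h>
#include <pdh.h>
#include <cwchar>
#include <vector>
#include <iostream>

int main()
{
    DWORD counterLen = 0, instanceLen = 0;
    // First call with empty buffers: returns PDH_MORE_DATA and fills the required sizes
    PdhEnumObjectItems(NULL, NULL, L"Hyper-V Container",
                       NULL, &counterLen, NULL, &instanceLen,
                       PERF_DETAIL_WIZARD, 0);

    std::vector<wchar_t> counters(counterLen), instances(instanceLen);
    PDH_STATUS status = PdhEnumObjectItems(NULL, NULL, L"Hyper-V Container",
                                           counters.data(), &counterLen,
                                           instances.data(), &instanceLen,
                                           PERF_DETAIL_WIZARD, 0);
    if (status != ERROR_SUCCESS || instances.empty())
    {
        std::cerr << "Failed to enumerate counter instances.\n";
        return 1;
    }

    // The instance list is a multi-string: walk each null-terminated entry
    for (wchar_t* p = instances.data(); *p; p += wcslen(p) + 1)
        std::wcout << p << L"\n";

    return 0;
}
</code></pre>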
| VonC |
<p>I have a Kubernetes pod which is based on Spring Boot. Locally I can start the jar file with an external file with this command:</p>
<pre><code>java -jar app.jar --spring.config.location=file:///Users/home/config/application.yml
</code></pre>
<p>But how to do it in kubernetes? I use Helm charts for configuration. Is there some solution with helm charts configuration?</p>
<p>I tried this:</p>
<p>default <code>application.yml</code></p>
<pre><code>init:
message: "Test message loaded from default application.yml file"
spring:
config:
location: ${CONFIG_LOCATION}
</code></pre>
<p><code>application-mockup-configuration.yml</code> inside the Helm chart:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: application-mockup-configuration
data:
application-mockup-configuration.yml: |
init:
message: "Test message loaded from default external application-mockup-configuration.yml file !!!!!!!!!!!"
spring:
application:
name: mockup-test
jackson:
default-property-inclusion: non_null
jmx:
enabled: false
</code></pre>
<p>deployment.yaml from the Helm chart:</p>
<pre><code> envFrom:
- secretRef:
name: mockup-secret-env
- configMapRef:
name: application-mockup-configuration-configmap-env
env:
- name: CONFIG_LOCATION
value: "/home/config/application-mockup-configuration.yml"
volumeMounts:
- name: application-mockup-configuration
mountPath: /home/config/application-mockup-configuration.yml
subPath: application-mockup-configuration.yml
</code></pre>
<p>But the issue is that I always get the default message from the default <code>application.yml</code> file inside the test Spring Boot app. The external file is not overriding the value for <code>init: message</code>. Do you know how I can solve this issue?</p>
| Peter Penzov | <blockquote>
<p>"Locally, I can start jar file with external file with that command: <code>java -jar app.jar --spring.config.location=file:///Users/home/config/application.yml</code>. But how to do it in kubernetes?".</p>
</blockquote>
<p>In a Kubernetes environment..., you would <em>not</em> directly execute the <code>java -jar</code> command as you do on your local machine.<br />
Instead, this command would be defined in the Dockerfile for your application's Docker image, or in the container spec in your <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer"><code>Deployment</code></a>.</p>
<p>The command to start your application would typically look like this:</p>
<pre><code>CMD ["java", "-jar", "app.jar"]
</code></pre>
<p>Then in your Kubernetes <code>Deployment</code>, you would add an additional argument <code>--spring.config.location=file:/home/config/application.yml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
containers:
- name: my-app
image: your-image-repo/your-image-name:tag
args: ["--spring.config.location=file:/home/config/application.yml"]
volumeMounts:
- name: config-volume
mountPath: /home/config
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}-configmap
items:
- key: application.yml
path: application.yml
</code></pre>
<p>In that configuration, the <code>args</code> field adds the <code>--spring.config.location=file:/home/config/application.yml</code> argument to the command that starts your application. When your application starts up, it reads its configuration from the <code>application.yml</code> file that was loaded from the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume" rel="nofollow noreferrer"><code>ConfigMap</code></a>.</p>
<p>Once you have applied the <a href="https://helm.sh/docs/chart_template_guide/getting_started/" rel="nofollow noreferrer">Helm chart</a> with <code>helm install</code> or <code>helm upgrade</code>, Kubernetes will take care of starting your application with the specified command and arguments.</p>
<p>When the pod starts, the Spring Boot application will run with the provided argument, pointing the configuration location to the file mounted from the <code>ConfigMap</code>.</p>
<hr />
<p>So, in details, in your <code>configmap.yaml</code>, define the configuration you want to provide to your application:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-configmap
data:
application.yml: |
init:
message: "Test message loaded from external application.yml file"
spring:
application:
name: mockup-test
jackson:
default-property-inclusion: non_null
jmx:
enabled: false
</code></pre>
<p>Then, in your <code>deployment.yaml</code>, define a Deployment object that mounts the ConfigMap as a volume, and add the <code>--spring.config.location</code> argument to the container spec:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}-deployment
spec:
replicas: 1
selector:
matchLabels:
app: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ .Release.Name }}
spec:
containers:
- name: my-app
image: your-image-repo/your-image-name:tag
command: ["java", "-jar", "app.jar"]
args: ["--spring.config.location=file:/home/config/application.yml"]
volumeMounts:
- name: config-volume
mountPath: /home/config
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}-configmap
items:
- key: application.yml
path: application.yml
</code></pre>
<p>That setup assumes that the <code>app.jar</code> file is located at the root of your Docker image. If it is in a different location, you would need to update the <code>command</code> accordingly. Similarly, replace <code>your-image-repo/your-image-name:tag</code> with the path to your actual Docker image.</p>
<p>When you deploy this Helm chart with a command like <code>helm install my-release ./my-chart</code>, Helm will create a Kubernetes Deployment and ConfigMap according to these templates.<br />
The <code>Deployment</code> will start your Spring Boot application with the specified command and arguments, which will tell the application to load its configuration from the external <code>application.yml</code> file provided by the <code>ConfigMap</code>.</p>
| VonC |
<p>When I want to generate YAML by running <code>kubectl</code>, it states that I should specify the <code>--generator=something</code> flag within the command.</p>
<p>For example, to get the deployment template via <code>kubectl</code>, I should run the below command:</p>
<pre><code>kubectl run --generator=deployment/v1beta1 nginx --image=nginx --dry-run -o yaml
</code></pre>
<p>Without mentioning the <code>--generator</code> flag, the CLI states in some way that I should mention the generator flag with a proper value (e.g. <code>run-pod/v1</code>).</p>
<p>My question is, what is essentially a generator? What does it do? Are they some sort of object creation templates or something else?</p>
| Farshid | <p>That was introduced in <a href="https://github.com/kubernetes/kubernetes/commit/426ef9335865ebef43f682da90796bd8bf976637" rel="noreferrer">commit 426ef93</a>, Jan. 2016 for Kubernetes v1.2.0-alpha.8.</p>
<p><a href="https://github.com/kubernetes/kubernetes/blob/426ef9335865ebef43f682da90796bd8bf976637/docs/devel/kubectl-conventions.md#generators" rel="noreferrer">The generators</a> were described as:</p>
<blockquote>
<p>Generators are <code>kubectl</code> commands that generate resources based on a set of inputs (other resources, flags, or a combination of both).</p>
<p>The point of generators is:</p>
<ul>
<li>to enable users using <code>kubectl</code> in a scripted fashion to pin to a particular behavior which may change in the future.<br>
Explicit use of a generator will always guarantee that the expected behavior stays the same.</li>
<li>to enable potential expansion of the generated resources for scenarios other than just creation, similar to how <code>-f</code> is supported for most general-purpose commands.</li>
</ul>
</blockquote>
<p>And:</p>
<blockquote>
<p>Generator commands should obey to the following conventions:</p>
<ul>
<li>A <code>--generator</code> flag should be defined. Users then can choose between different generators, if the command supports them (for example, <code>kubectl run</code> currently supports generators for pods, jobs, replication controllers, and deployments), or between different versions of a generator so that users depending on a specific behavior may pin to that version (for example, <code>kubectl</code> expose currently supports two different versions of a service generator).</li>
<li>Generation should be decoupled from creation.<br>
A generator should implement the <code>kubectl.StructuredGenerator</code> interface and have no dependencies on cobra or the Factory</li>
</ul>
</blockquote>
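<p>For example, on <code>kubectl</code> versions that still support the flag (it was later deprecated and removed), you can see how the chosen generator changes the produced resource:</p>
<pre><code># emits a Pod manifest
kubectl run nginx --image=nginx --generator=run-pod/v1 --dry-run -o yaml

# emits a Deployment manifest
kubectl run nginx --image=nginx --generator=deployment/apps.v1 --dry-run -o yaml
</code></pre>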
| VonC |
<p>I have two containers, say A and B, where A should run before B. However, A is a server application whose final state is Running, not Completed, so I wonder: will B never be executed in this case? How can I deal with that?</p>
| edselwang | <p>If A and B are part of the same pod, then <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">initContainer</a> is the legacy way to establish ordering.</p>
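<p>A minimal sketch of that pattern, assuming A is reachable through a Service named <code>a-service</code> (all names here are illustrative):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: b-pod
spec:
  initContainers:
  - name: wait-for-a            # must exit successfully before 'b' starts
    image: busybox
    command: ['sh', '-c', 'until nc -z a-service 80; do echo waiting for A; sleep 2; done']
  containers:
  - name: b
    image: my-b-image           # placeholder
</code></pre>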
<p>From the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-states" rel="nofollow noreferrer">Kubernetes Pod lifecycle</a>, I suppose you mean "Running, but not Terminated".</p>
<p>A <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">pod liveness/readiness probe</a> is in your case a better fit, since the server will not accept requests until it is ready.</p>
<p>Read "<a href="https://medium.com/faun/straight-to-the-point-kubernetes-probes-e5b23e267d9d" rel="nofollow noreferrer">Straight to the Point: Kubernetes Probes</a>" from <a href="https://twitter.com/petomalina" rel="nofollow noreferrer">Peter Malina</a></p>
<blockquote>
<p>Both readiness and liveness probe run in parallel throughout the life of a container. </p>
<ul>
<li>Use the liveness probe to detect an internal failure and restart the container (e.g. HTTP server down). </li>
<li>Use the readiness probe to detect if you can serve traffic (e.g. established DB connection) and wait (not restart) for the container. </li>
</ul>
<p>A dead container is also not a ready container.<br>
To serve traffic, all containers within a pod must be ready.</p>
</blockquote>
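<p>A minimal sketch of such probes on the server container (the image, path and port are placeholders for your application's actual health endpoint):</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
- name: a
  image: my-server-image        # placeholder
  ports:
  - containerPort: 8080
  readinessProbe:               # gate traffic until A can serve it
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  livenessProbe:                # restart the container on internal failure
    httpGet:
      path: /healthz
      port: 8080
    periodSeconds: 30
</code></pre>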
<p>You can add a <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate" rel="nofollow noreferrer">pod readiness gate (stable from 1.14)</a> to specify additional conditions to be evaluated for Pod readiness.</p>
<p>Read also "<a href="https://blog.colinbreck.com/kubernetes-liveness-and-readiness-probes-how-to-avoid-shooting-yourself-in-the-foot/" rel="nofollow noreferrer">Kubernetes Liveness and Readiness Probes: How to Avoid Shooting Yourself in the Foot</a>" from <a href="https://blog.colinbreck.com/" rel="nofollow noreferrer">Colin Breck</a></p>
<p>"<a href="https://stackoverflow.com/a/53874319/6309">Should Health Checks call other App Health Checks</a>" compares that approach with the <a href="https://medium.com/@xcoulon/initializing-containers-in-order-with-kubernetes-18173b9cc222" rel="nofollow noreferrer">InitContainer approach</a></p>
| VonC |
<p>So I am having an issue with prefect-server UI connecting to my API within my kubernetes environment. I am using a nginx as my ingress.</p>
<p>I have my ingress configuration for the prefect-server service like so:</p>
<pre><code>https://myhost.com/prefect/ -> http://localhost:3000/api
</code></pre>
<p>I am able to access the UI, but it says it cannot connect to the API:</p>
<p><a href="https://i.stack.imgur.com/Ew42r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ew42r.png" alt="enter image description here" /></a></p>
<p>deployment.yml for prefect-server:</p>
<pre><code>...
- image: <my registry>
imagePullPolicy: Always
name: prefect-server
command: ["prefect", "server", "start", "--host", "0.0.0.0", "--log-level", "WARNING", "--port", "3000"]
workingDir: /opt/prefect
env:
- name: PREFECT_UI_API_URL
value: "http://localhost:3000/api"
</code></pre>
<p>deployment.yml for agent:</p>
<pre><code>...
- image: <my registry>
imagePullPolicy: Always
name: prefect-agent
command: ["prefect", "agent", "start", "--work-queue", "dev"]
workingDir: /opt/prefect
env:
- name: PREFECT_API_URL
value: "http://prefect-server.prefect.svc.cluster.local:3000/api" #WORKS
</code></pre>
<p>Here is the output of <code>kubectl describe service -n prefect</code></p>
<pre><code>Name: prefect-server
Namespace: prefect
Labels: app.kubernetes.io/managed-by=Helm
run=prefect-server
Annotations: meta.helm.sh/release-name: prefect
meta.helm.sh/release-namespace: prefect
Selector: run=prefect-server
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 11.11.11.11 <redacted>
IPs: 11.11.11.11 <redacted>
Port: <unset> 80/TCP
TargetPort: 3000/TCP
Endpoints: 11.22.22.22:3000 <redacted>
Session Affinity: None
Events: <none>
</code></pre>
<p>The agent is able to connect to the service container with no issues. For the prefect-server PREFECT_UI_API_URL, I have tried:</p>
<pre><code>"http://prefect-server.prefect.svc.cluster.local:3000/api"
"https://myhost.com/prefect/api"
"http://127.0.0.1:3000/api"
</code></pre>
<p><strong>UPDATE:</strong>
Thank you VonC for your help, I think I am heading in the right direction. I made the changes to the PREFECT_UI_API_URL and ingress. I also had to change the agent URL to <code>http://prefect-server.prefect.svc.cluster.local:3000/prefect/api</code>. However, while the "cannot connect to Server API.." error goes away, the page is still a 404 when I visit the <code>https://myhost.com/prefect/flow-runs</code> endpoint.</p>
<p>Output for <code>kubectl describe ingress -n prefect</code></p>
<pre><code>Name: prefect
Namespace: prefect
Address: X.X.X.50
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
/mysecret terminates
Rules:
Host Path Backends
---- ---- --------
myhost.com
/prefect prefect-server:80 (X.X.X.131:3000)
/prefect/api prefect-server:3000 (X.X.X.131:3000)
Annotations: field.cattle.io/publicEndpoints:
[{"addresses":["X.X.X.50"],"port":80,"protocol":"HTTP","serviceName":"prefect:prefect-server","ingressName":"prefect:prefect","hostname"...
ingress.kubernetes.io/secure-backends: true
ingress.kubernetes.io/ssl-redirect: true
kubernetes.io/ingress.class: f5
meta.helm.sh/release-name: prefect
meta.helm.sh/release-namespace: prefect
virtual-server.f5.com/balance: round-robin
virtual-server.f5.com/http-port: 443
virtual-server.f5.com/ip: X.X.X.50
virtual-server.f5.com/partition: kubernetes
</code></pre>
<p>My ingress.yml:</p>
<pre><code>{{- if .Values.ingress.enabled }}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: prefect
annotations:
{{- if eq .Values.ingress.class "f5" }}
{{- if .Values.ingress.tls }}
ingress.kubernetes.io/secure-backends: "true"
{{- end }}
{{- $f5healthpath := printf "%s%s" .Values.ingress.hostname .Values.ingress.path }}
{{- $f5path := trimPrefix "/" .Values.ingress.path }}
{{- $f5partition := .Values.ingress.f5partition }}
{{- $f5httpport := .Values.ingress.f5httpport}}
{{- $f5ip := .Values.ingress.f5ip }}
{{- $f5ssl := .Values.ingress.f5ssl }}
{{- $f5class:= .Values.ingress.class}}
{{- range $key, $value := .Values.ingress.annotations }}
{{ $key }}: {{ $value | replace "F5HEALTHPATH" $f5healthpath | replace "F5PATITION" $f5partition | replace "F5HTTPPORT" $f5httpport | replace "F5IP" $f5ip | replace "F5SSL" $f5ssl | replace "F5CLASS" $f5class | replace "F5PATH" $f5path | quote }}
{{- end }}
{{- else }}
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
{{- end }}
spec:
tls:
- secretName: /mysecret
rules:
- host: "myhost.com"
http:
paths:
- path: "/prefect"
pathType: Prefix
backend:
serviceName: prefect-server
servicePort: 80
- path: "/prefect/api"
pathType: Prefix
backend:
serviceName: prefect-server
servicePort: 3000
{{- end }}
</code></pre>
<p>I believe the issue may be within my Ingress; I would like all the paths to the prefect-server to have the prefix path <code>/prefect/</code>.</p>
| cozzymotto | <p>Double-check the <a href="https://docs.prefect.io/2.10.20/concepts/settings/#prefect_api_url" rel="nofollow noreferrer"><code>PREFECT_UI_API_URL</code></a> in your <code>prefect-server</code> deployment.<br />
This variable is used by the Prefect UI to understand where to find the API.</p>
<p>However, using "<code>localhost</code>" or "<code>127.0.0.1</code>" or "<code>prefect-server.prefect.svc.cluster.local</code>" would only be applicable if the UI (browser) is running on the same machine where the server is running, which is generally not the case.</p>
<ul>
<li>When using "<code>localhost</code>" or "<code>127.0.0.1</code>", the browser will try to connect to the local machine, not the server.<br />
In the context of Kubernetes, "<code>localhost</code>" or "<code>127.0.0.1</code>" will refer to the client's machine (the machine where the browser is running), not the machine where your Kubernetes cluster is running.</li>
<li>Similarly, "<code>prefect-server.prefect.svc.cluster.local</code>" is only reachable within the Kubernetes cluster network. This internal network is not exposed to the internet and therefore not accessible from a client machine outside of that network.</li>
</ul>
<p>So, when you are setting the <code>PREFECT_UI_API_URL</code> environment variable, you need to provide an external, publicly accessible URL that the browser can reach over the internet. This URL should point to the external endpoint that's been created for your Prefect API service, typically using an Ingress controller like nginx.<br />
Assuming you have set up ingress to route <code>https://myhost.com/prefect/api</code> to your Prefect API, you should use this URL as the value of <code>PREFECT_UI_API_URL</code>.</p>
<p>Try updating your <code>prefect-server</code> deployment like so:</p>
<pre class="lang-yaml prettyprint-override"><code>...
- image: <my registry>
imagePullPolicy: Always
name: prefect-server
command: ["prefect", "server", "start", "--host", "0.0.0.0", "--log-level", "WARNING", "--port", "3000"]
workingDir: /opt/prefect
env:
- name: PREFECT_UI_API_URL
value: "https://myhost.com/prefect/api"
</code></pre>
<p>And you need to ensure that your ingress routes <code>https://myhost.com/prefect/api</code> to the Prefect API service in your Kubernetes cluster properly. Double-check your nginx ingress configuration, it should be something like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: prefect-ingress
spec:
rules:
- host: "myhost.com"
http:
paths:
- pathType: Prefix
path: "/prefect/api"
backend:
service:
name: prefect-server
port:
number: 3000
</code></pre>
<p>Do replace <code>myhost.com</code> with your actual domain name and check if the service name and port number match your cluster configuration.</p>
<hr />
<p>From the updated information you have provided, it seems you have made good progress. However, you have correctly identified that there may still be an issue with your Ingress configuration. It seems like the nginx rewrite target annotation is missing, which is important when using the ingress controller to rewrite the requested URLs.</p>
<p>The annotation <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#deployment" rel="nofollow noreferrer"><code>nginx.ingress.kubernetes.io/rewrite-target: /$2</code></a> will replace the URL path from the incoming request to match the target service in the Kubernetes cluster. That annotation is important for a scenario where you want to add a prefix path to your service (in your case, "<code>/prefect/</code>").</p>
<p>Consider this part of your <code>ingress.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>{{- else }}
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
{{- end }}
</code></pre>
<p>That block seems to be intended for an nginx ingress, and the rewrite target is set correctly here for an nginx ingress. However, if you are using an F5 ingress as indicated by your configuration (<code>kubernetes.io/ingress.class: f5</code>), this block will not be applied, and the nginx-specific rewrite rules will not be effective.</p>
<p>If you are using the F5 ingress controller, it may not recognize the <code>nginx.ingress.kubernetes.io/rewrite-target: /$2</code> annotation, since it's specific to the nginx ingress controller. Therefore, the F5 ingress controller may not have the ability to rewrite the URL path as expected.</p>
<p>You may need to consult the <a href="https://clouddocs.f5.com/" rel="nofollow noreferrer">F5 documentation</a> or support for guidance on how to perform URL rewrites or equivalent configurations to achieve your goal. I would check if F5 provides any URL rewriting capabilities similar to nginx's <code>rewrite-target</code>: F5 may have a different mechanism or annotation for doing this, which would need to be configured accordingly to match your desired <code>/prefect/</code> path prefix.</p>
<p>If the F5 controller doesn't provide this functionality or if you find it easier to configure with nginx, then switching to the nginx ingress controller might be an option to consider.</p>
| VonC |
<p>I have a few questions regarding the configMap versioning.</p>
<ol>
<li><p>Is it possible to use a specific version of a configMap in the deployment file?</p>
</li>
<li><p>I don't see any APIs to get a list of versions. How do I get the list of versions?</p>
</li>
<li><p>Is it possible to compare configMap between versions?</p>
</li>
<li><p>How to control the number of versions?</p>
</li>
</ol>
| user1578872 | <blockquote>
<p>Is it possible to use a specific version of a configMap in the deployment file?</p>
</blockquote>
<p>Not really.<br>
The closest notion of a "version" is resourceVersion, but that is not for the user to directly act upon.</p>
<p>See <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#concurrency-control-and-consistency" rel="nofollow noreferrer">API conventions: concurrency control and consistency</a>:</p>
<blockquote>
<p>Kubernetes leverages the concept of resource versions to achieve optimistic concurrency. All Kubernetes resources have a "<code>resourceVersion</code>" field as part of their metadata. This <code>resourceVersion</code> is a string that identifies the internal version of an object that can be used by clients to determine when objects have changed.<br>
When a record is about to be updated, its version is checked against a pre-saved value, and if it doesn't match, the update fails with a <code>StatusConflict</code> (HTTP status code 409).</p>
<p>The <code>resourceVersion</code> is changed by the server every time an object is modified. If <code>resourceVersion</code> is included with the <code>PUT</code> operation the system will verify that there have not been other successful mutations to the resource during a read/modify/write cycle, by verifying that the current value of <code>resourceVersion</code> matches the specified value.</p>
<p>The <code>resourceVersion</code> is currently backed by etcd's <code>modifiedIndex</code>.<br>
However, it's important to note that the application should not rely on the implementation details of the versioning system maintained by Kubernetes. We may change the implementation of <code>resourceVersion</code> in the future, such as to change it to a timestamp or per-object counter.</p>
<p>The only way for a client to know the expected value of <code>resourceVersion</code> is to have received it from the server in response to a prior operation, typically a <code>GET</code>. This value MUST be treated as opaque by clients and passed unmodified back to the server.<br>
Clients should not assume that the resource version has meaning across namespaces, different kinds of resources, or different servers.<br>
Currently, the value of <code>resourceVersion</code> is set to match etcd's sequencer. You could think of it as a logical clock the API server can use to order requests.
However, we expect the implementation of <code>resourceVersion</code> to change in the future, such as in the case we shard the state by kind and/or namespace, or port to another storage system.</p>
<p>In the case of a conflict, the correct client action at this point is to <code>GET</code> the resource again, apply the changes afresh, and try submitting again.<br>
This mechanism can be used to prevent races like the following:</p>
</blockquote>
<pre><code>Client #1 Client #2
GET Foo GET Foo
Set Foo.Bar = "one" Set Foo.Baz = "two"
PUT Foo PUT Foo
</code></pre>
<blockquote>
<p>When these sequences occur in parallel, either the change to <code>Foo.Bar</code> or the change to <code>Foo.Baz</code> can be lost.</p>
<p>On the other hand, when specifying the <code>resourceVersion</code>, one of the <code>PUT</code>s will fail, since whichever write succeeds changes the <code>resourceVersion</code> for <code>Foo</code>.</p>
<p><code>resourceVersion</code> may be used as a precondition for other operations (e.g., <code>GET</code>, <code>DELETE</code>) in the future, such as for read-after-write consistency in the presence of caching.</p>
<p>"Watch" operations specify <code>resourceVersion</code> using a query parameter. It is used to specify the point at which to begin watching the specified resources.<br>
This may be used to ensure that no mutations are missed between a <code>GET</code> of a resource (or list of resources) and a subsequent Watch, even if the current version of the resource is more recent.<br>
This is currently the main reason that list operations (<code>GET</code> on a collection) return <code>resourceVersion</code>.</p>
</blockquote>
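<p>As an illustration, here is a minimal sketch (the ConfigMap name <code>my-config</code> is hypothetical) showing that every mutation bumps <code>resourceVersion</code>; note that this is the only "version" you can observe: the value is opaque, and the API keeps no history of past versions to list or compare.</p>
<pre class="lang-bash prettyprint-override"><code># Read the current resourceVersion
kubectl get configmap my-config -o jsonpath='{.metadata.resourceVersion}'

# Any change to the object bumps it to a new opaque value
kubectl patch configmap my-config --type merge -p '{"data":{"key":"new-value"}}'
kubectl get configmap my-config -o jsonpath='{.metadata.resourceVersion}'
</code></pre>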
| VonC |
<h1>How to deploy dash/flask applications on kubernetes</h1>
<p>I have developed a dash application consisting of three main components:</p>
<ol>
<li>A python backend script</li>
<li>A flask webserver</li>
<li>A dash frontend</li>
</ol>
<p>The backend fetches data from a web API and POSTs it to various webserver endpoints. The frontend GETs data from various endpoints and presents the data in a dashboard.</p>
<p>All components are deployed in Kubernetes. The frontend and the Flask webserver are deployed as services. Both are based on Flask's built-in development webserver, which I am noticing is not capable of handling a lot of traffic. I would like to have both of these deployed with nginx and gunicorn, but I am not sure how to achieve this for my Kubernetes setup.</p>
<p>Below are examples of source code (not a complete and executable example, but just to show how the application is designed) and examples of kubernetes manifests for <code>Deployment</code> and <code>Service</code>.</p>
<h2>Source code</h2>
<h3>Flask webserver</h3>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, request, jsonify
current_data = {"message": "No timeseries data yet"}
current_vessel_data = {"message": "No vessel timeseries data yet"}
current_timing = {"message": "No timing data yet"}
current_gangway_status = {"message": "No gangway status yet"}
HOST = "0.0.0.0"
PORT = 9898
app = Flask(__name__)
@app.route("/post_timeseries_data", methods=["POST"])
def receive_data() -> str:
# log("This is the backend posting timeseries data.", prefix=logger_prefix)
global current_data
data = request.data
current_data = data
return jsonify({"message": "Data received successfully"})
@app.route("/get_timeseries_data", methods=["GET"])
def get_timeseries_data() -> str:
# log("This is a frontend instance requesting timeseries data.", prefix=logger_prefix)
global current_data
return current_data
@app.route("/post_timing_data", methods=["POST"])
def receive_timing_data() -> str:
# log("This is the backend posting timing data.", prefix=logger_prefix)
global current_timing
timing = request.data
current_timing = timing
return jsonify({"message": "Timing received successfully"})
@app.route("/get_timing_data", methods=["GET"])
def get_timing_data() -> str:
# log("This is a frontend instance requesting timing data.", prefix=logger_prefix)
global current_timing
return current_timing
@app.route("/post_gangway_status", methods=["POST"])
def receive_gangway_status() -> str:
# log("This is the backend posting gangway status.", prefix=logger_prefix)
global current_gangway_status
gangway_status = request.data
current_gangway_status = gangway_status
return jsonify({"message": "Gangway status received successfully."})
@app.route("/get_gangway_status", methods=["GET"])
def get_gangway_status() -> str:
# log("This is a frontend instance requesting gangway status.", prefix=logger_prefix)
global current_gangway_status
return current_gangway_status
@app.route("/post_timeseries_data_vessel", methods=["POST"])
def post_timeseries_data_vessel() -> str:
global current_vessel_data
vessel_data = request.data
current_vessel_data = vessel_data
return jsonify({"message": "Vessel timeseries data received successfully."})
@app.route("/get_timeseries_data_vessel", methods=["GET"])
def get_timeseries_data_vessel() -> str:
global current_vessel_data
return current_vessel_data
@app.route("/")
def root_content() -> str:
"""For the readiness probe."""
return "This is the root page of the W2W file server."
if __name__ == "__main__":
app.run(host=HOST, port=PORT)
</code></pre>
<h3>Python script backend</h3>
<p>I don't want to give the entire backend code, but essentially it is a while loop that continuously fetches data from a web API and posts timeseries data and timings to the POST endpoints defined in the webserver above. The skeleton is something like this:</p>
<pre class="lang-py prettyprint-override"><code>import requests
import pandas as pd
import time
host = ""
post = 9898
post_path = ""
while True:
now = pd.Timestamp.now(tz="CET").floor("1s")
start= now - pd.Timedelta(minutes=30)
t0 = time.perf_counter()
df = get_data(start=start, end=now)
timing = time.perf_counter() - t0
data = {"df": df.to_json(date_format="iso", "timing": timing)
requests.post(post_path, json=json.dumps(data))
</code></pre>
<h3>Dash frontend</h3>
<p>The dash app is a fairly standard multipage app. The main file <code>app.py</code> looks like this, but with the callback details removed for clarity. The <code>main()</code> function returns the Dash instance.</p>
<pre class="lang-py prettyprint-override"><code>import dash
import ids
import requests
import json
import utils
import time
import pandas as pd
import dash_bootstrap_components as dbc
from logger import log
from dash import Dash, html, dcc, callback, ctx
from dash.dependencies import Input, Output, State
from dash.exceptions import PreventUpdate
from typing import Tuple, Union
logger_prefix = "W2W Frontend"
APP_TITLE = "W2W Vessel Insight Dashbord"
INTERVAL_REFRESH = 5 * 1000
###################################
###### P A R A M E T E R S ######
###################################
DEBUG = False
APP_PORT = 9898 if not DEBUG else 8080
DATA_PORT = 9898
DATA_HOST = "walk-to-work-fileserver-service.walk-to-work.svc.cluster.local" if not DEBUG else "127.0.0.1"
APP_HOST = "0.0.0.0" if not DEBUG else "127.0.0.1"
path_timeseries_data = f"http://{DATA_HOST}:{DATA_PORT}/get_timeseries_data"
path_timeseries_data_vessel = f"http://{DATA_HOST}:{DATA_PORT}/get_timeseries_data_vessel"
path_timing_data = f"http://{DATA_HOST}:{DATA_PORT}/get_timing_data"
path_gangway_status = f"http://{DATA_HOST}:{DATA_PORT}/get_gangway_status"
###################################
def main() -> Dash:
"""Main function for creating the application.
Returns:
Dash: Dash application instance
"""
app = Dash(__name__, external_stylesheets=[dbc.themes.SLATE], use_pages=True)
app.title = APP_TITLE
@callback(
[
Output(ids.STORE_CURRENT_DATA_VESSEL, "data"),
Output(ids.STORE_PREVIOUS_DATA_VESSEL, "data")
],
[
Input(ids.INTERVAL_TIMER, "n_intervals"),
Input(ids.STORE_DROPDOWN_DATA_LENGTH_VALUE, "data")
],
State(ids.STORE_CURRENT_DATA_VESSEL, "data")
)
def get_data(n: int, data_length: str, previous_data: str) -> Tuple[str]:
        ...  # callback body omitted for clarity
app.layout = html.Div(children=[
# Page container necessary for multipage
dash.page_container,
# Automatic refresh interval
dcc.Interval(id=ids.INTERVAL_TIMER, interval=INTERVAL_REFRESH, n_intervals=0),
# Store data fetched from Vessel Insight API
dcc.Store(id=ids.STORE_CURRENT_DATA),
dcc.Store(id=ids.STORE_PREVIOUS_DATA),
dcc.Store(id=ids.STORE_BRIDGE_STATUS),
dcc.Store(id=ids.STORE_TIMING_DATA),
dcc.Store(id=ids.STORE_DROPDOWN_DATA_LENGTH_VALUE), # Used to pass dropdown selection to this callback.
dcc.Store(id=ids.STORE_CALLBACK_START_OF_TIMING),
dcc.Store(id=ids.STORE_CURRENT_DATA_VESSEL),
dcc.Store(id=ids.STORE_PREVIOUS_DATA_VESSEL)
])
return app
if __name__ == "__main__":
app = main()
app.run(debug=True, port=APP_PORT, host=APP_HOST, dev_tools_hot_reload=False)
</code></pre>
<h2>Kubernetes manifests</h2>
<p>The frontend and webserver are deployed as services, while the backend is not. The deployment and service manifests are more or less identical for all three components, so I just give one example of each:</p>
<h3><code>deployment.yaml</code></h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ${myappname_component}-fileserver
namespace: ${namespace}
labels:
app.kubernetes.io/name: ${myappname_component}-fileserver
app.kubernetes.io/created-by: ${application}
app.kubernetes.io/component: ${myappname_component}-fileserver
spec:
selector:
matchLabels:
app: ${myappname_component}-fileserver
template:
metadata:
labels:
app: ${myappname_component}-fileserver
app.kubernetes.io/name: ${myappname_component}-fileserver
app.kubernetes.io/created-by: ${application}
app.kubernetes.io/component: ${myappname_component}-fileserver
spec:
serviceAccountName: ${keyvault_mi_service_account_name}
securityContext:
runAsUser: 1000
fsGroup: 1000
runAsNonRoot: true
seccompProfile:
type: "RuntimeDefault"
containers:
- name: ${fileserver_image_name}
image: ${container_registry}/${application_image_repository}/${fileserver_image_name}:${fileserver_image_version}
imagePullPolicy: Always
ports:
- name: http-app
containerPort: ${app_port}
protocol: TCP
livenessProbe:
httpGet:
path: /
port: ${app_port}
initialDelaySeconds: 10
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 30
readinessProbe:
httpGet:
path: /
port: ${app_port}
initialDelaySeconds: 10
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 30
resources:
limits:
cpu: 400m
memory: 400Mi
requests:
cpu: 200m
memory: 200Mi
securityContext:
runAsUser: 1000
runAsNonRoot: true
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
</code></pre>
<h3><code>service.yaml</code></h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: ${myappname_component}-fileserver-service
namespace: ${namespace}
labels:
app.kubernetes.io/created-by: ${application}
app.kubernetes.io/name: ${myappname_component}
app.kubernetes.io/component: ${myappname_component}
spec:
selector:
app.kubernetes.io/component: ${myappname_component}-fileserver
ports:
- name: ${myappname_component}-fileserver-service
port: ${app_port}
targetPort: ${app_port}
</code></pre>
| Yoda | <p>You probably ran your <a href="https://flask.palletsprojects.com/en/2.3.x/tutorial/factory/" rel="nofollow noreferrer">Flask app</a> directly with the Flask built-in server:</p>
<pre class="lang-py prettyprint-override"><code>if __name__ == "__main__":
app.run(host='0.0.0.0')
</code></pre>
<p>You should prepare your Flask app to be <a href="https://stackoverflow.com/q/34478320/6309">compatible with Gunicorn</a>.</p>
<ul>
<li>Make sure your Flask app's main instance is named <code>app</code>.</li>
<li>Test the Flask application locally with Gunicorn:
<pre class="lang-bash prettyprint-override"><code>gunicorn app:app --bind 0.0.0.0:8000 --workers 4
</code></pre>
</li>
</ul>
<p>Then you can start changing Docker.<br />
Your Dockerfile might have been as follows, using Flask's built-in server:</p>
<pre><code>FROM python:3.7
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
</code></pre>
<p>You would need to use <a href="https://gunicorn.org/" rel="nofollow noreferrer">Gunicorn</a> to serve your Flask application in the Docker container.</p>
<p>For that, modify your <code>Dockerfile</code> to include Gunicorn:</p>
<pre><code>FROM python:3.7
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
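# Note: gunicorn must be available in the image; add it to requirements.txt
# or install it explicitly with: RUN pip install gunicorn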
CMD ["gunicorn", "app:app", "--bind", "0.0.0.0:8000", "--workers", "4"]
</code></pre>
<p>And build, then test the Docker image locally:</p>
<pre class="lang-bash prettyprint-override"><code>docker build -t your-image-name .
docker run -p 8000:8000 your-image-name
</code></pre>
<p>Finally, update your basic Kubernetes deployment, which currently exposes the Flask app directly.</p>
<p>You can introduce Nginx as a reverse proxy alongside your Flask app in the same pod. That would also involve a <code>ConfigMap</code> for Nginx configuration.<br />
See for example "<strong><a href="https://www.kisphp.com/kubernetes/deploy-a-flask-application-with-nginx-on-kubernetes" rel="nofollow noreferrer">Deploy a flask application with nginx on Kubernetes</a></strong>"</p>
<p>Create a <code>ConfigMap</code> for Nginx: <code>nginx-configmap.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
data:
nginx.conf: |
user nginx;
...
# Rest of the Nginx configuration
</code></pre>
<p>Apply it:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl apply -f nginx-configmap.yaml
</code></pre>
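<p>For reference, the elided <code>nginx.conf</code> can stay very small: a reverse proxy to the Gunicorn container listening on port 8000 in the same pod. A minimal sketch (not a hardened production configuration):</p>
<pre><code>user nginx;
events {}

http {
  server {
    listen 80;

    location / {
      # Both containers share the pod's network namespace,
      # so Gunicorn is reachable on localhost
      proxy_pass http://127.0.0.1:8000;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }
  }
}
</code></pre>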
<p>Then, modify your <code>deployment.yaml</code> to include both the Flask (Gunicorn) and Nginx containers.<br />
For instance, you would go from your current code:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
...
spec:
...
template:
...
spec:
containers:
- name: flask-app
image: your-flask-image
ports:
- containerPort: 5000
</code></pre>
<p>To:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
...
spec:
...
template:
...
spec:
volumes:
- name: nginx-config-volume
configMap:
name: nginx-config
containers:
- name: flask-app
image: your-flask-image
ports:
- containerPort: 8000
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
volumeMounts:
- name: nginx-config-volume
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
</code></pre>
<p>And apply the updated deployment:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl apply -f deployment.yaml
</code></pre>
<p>Finally, you can make sure everything is running with:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get pods
kubectl logs <pod_name> -c <container_name>
</code></pre>
<p>Access your application using the appropriate Kubernetes service endpoint or ingress to validate it works as expected.</p>
| VonC |
<p>I am trying to run a kubernetes cluster on mac os for some prototyping using docker (not vagrant or virtualbox). </p>
<p>I found the instructions online at <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/getting-started-guides/docker.md" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/getting-started-guides/docker.md</a> but the instructions are 3 years old (Oct 2015).<br>
The instructions refer to boot2docker but the present version of docker on mac (Docker Community Edition v 18.06.1-ce-mac73) doesn't have boot2docker. </p>
<p>Can you point me to the latest instructions?</p>
| user674669 | <p>Since 2015, everything has been move to the <a href="https://github.com/kubernetes/website" rel="nofollow noreferrer">Kubernetes website GitHub repo</a>.</p>
<p>The full installation/process page is now at <a href="https://kubernetes.io/docs/tasks/" rel="nofollow noreferrer"><code>kubernetes.io/docs/tasks/</code></a>.</p>
<p>And since <a href="https://github.com/kubernetes/website/issues/7307" rel="nofollow noreferrer">issue 7307</a>, a Kubernetes installation on macOS would no longer use xHyve, but, as <a href="https://kubernetes.io/docs/tasks/tools/install-minikube/" rel="nofollow noreferrer">stated in the documentation</a>:</p>
<blockquote>
<p>macOS: <a href="https://www.virtualbox.org/wiki/Downloads" rel="nofollow noreferrer">VirtualBox</a> or <a href="https://www.vmware.com/products/fusion" rel="nofollow noreferrer">VMware Fusion</a>, or HyperKit.</p>
</blockquote>
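<p>For example, after installing one of those hypervisors, a typical setup looks like this (a sketch; the exact driver flag depends on your minikube version):</p>
<pre class="lang-bash prettyprint-override"><code>brew install minikube
minikube start --vm-driver=virtualbox
kubectl get nodes
</code></pre>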
| VonC |
<p>I am using the ClickHouse database, and the data are stored at <code>/media/user/data/clickhouse</code> and <code>/media/user/data/clickhouse-server</code>. When I run a docker container</p>
<pre><code>$ docker run \
--name local-clickhouse \
--ulimit nofile=262144:262144 \
-u 1000:1000 \
-p 8123:8123 \
-p 9000:9000 \
-p 9009:9009 \
-v /media/user/data/clickhouse:/var/lib/clickhouse \
-v /media/user/data/clickhouse-server:/var/log/clickhouse-server \
-dit clickhouse/clickhouse-server
</code></pre>
<p>I see the data and everything is fine. I am trying to run this in a pod using minikube, with the following persistent volume configs:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: host-pv-clickhouse
spec:
capacity:
storage: 4000Gi
volumeMode: Filesystem
storageClassName: standard
accessModes:
- ReadWriteOnce
hostPath:
path: /media/user/data/clickhouse
type: DirectoryOrCreate
</code></pre>
<p>and</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: host-pv-clickhouse-server
spec:
capacity:
storage: 4000Gi
volumeMode: Filesystem
storageClassName: standard
accessModes:
- ReadWriteOnce
hostPath:
path: /media/user/data/clickhouse-server
type: DirectoryOrCreate
</code></pre>
<p>Additionally, I also have persistent volume claims:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: host-pvc-clickhouse-server
spec:
volumeName: host-pv-clickhouse-server
storageClassName: standard
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2000Gi
</code></pre>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: host-pvc-clickhouse
spec:
volumeName: host-pv-clickhouse
storageClassName: standard
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2000Gi
</code></pre>
<p>and finally service and deployment:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: clickhouse
spec:
type: NodePort
ports:
- protocol: TCP
name: tcp
port: 9000
targetPort: 9000
nodePort: 30003
- protocol: TCP
name: http
port: 8123
targetPort: 8123
nodePort: 30004
- protocol: TCP
name: interncomm
port: 9009
targetPort: 9009
nodePort: 30005
selector:
app: clickhouse
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: clickhouse
labels:
app: clickhouse
spec:
replicas: 1
selector:
matchLabels:
app: clickhouse
template:
metadata:
labels:
app: clickhouse
spec:
containers:
- name: clickhouse
image: clickhouse/clickhouse-server:latest
ports:
- containerPort: 8123
name: http
- containerPort: 9000
name: tcp
- containerPort: 9009
name: interncomm
volumeMounts:
- name: clickhouse-volume
mountPath: /var/lib/clickhouse
- name: clickhouse-server-volume
mountPath: /var/log/clickhouse-server
volumes:
- name: clickhouse-volume
persistentVolumeClaim:
claimName: host-pvc-clickhouse
- name: clickhouse-server-volume
persistentVolumeClaim:
claimName: host-pvc-clickhouse-server
</code></pre>
<p>When I run
<code>kubectl apply -f chdb_node.yaml</code> it works and I can access the database via clickhouse's web gui. However, the data aren't there.</p>
<p>Any suggestions on how to fix this?</p>
| MoneyBall | <p>Check first the data at <code>/media/user/data/clickhouse</code> and <code>/media/user/data/clickhouse-server</code> on your host machine indeed contains the expected ClickHouse data.</p>
<p>And remember that, when you use Minikube, it typically runs in a virtual machine (like VirtualBox). It means the <code>hostPath</code> you are providing is referencing the filesystem of the VM, not your actual host.<br />
To use the host filesystem in Minikube with <code>hostPath</code>, make sure that the directory is properly mounted into the Minikube VM, using <a href="https://minikube.sigs.k8s.io/docs/handbook/mount/" rel="nofollow noreferrer"><code>minikube mount</code></a>.</p>
<pre class="lang-bash prettyprint-override"><code>minikube mount /media/user/data:/media/user/data
</code></pre>
<p>That would mount the <code>/media/user/data</code> directory from your host into the Minikube VM at the same location. After doing this, your <code>hostPath</code> configuration should work as expected.</p>
<p>Warning: Your Docker run command specifies a user and group with UID and GID as <code>1000:1000</code>. Ensure that the files and directories at <code>/media/user/data/clickhouse</code> and <code>/media/user/data/clickhouse-server</code> are owned by this UID and GID. If not, the ClickHouse server might not be able to read them.<br />
In your Kubernetes pod, you did not specify the user or group to run the container. You might want to set the same user as you did with Docker using <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">the <code>securityContext</code></a>:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
containers:
- name: clickhouse
...
securityContext:
runAsUser: 1000
runAsGroup: 1000
</code></pre>
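<p>A quick way to verify both points (paths and UID/GID taken from your question):</p>
<pre class="lang-bash prettyprint-override"><code># On the host: the data must be owned by UID/GID 1000
ls -ln /media/user/data/clickhouse /media/user/data/clickhouse-server

# Inside the Minikube VM: the mount must be visible at the hostPath location
minikube ssh -- ls -ln /media/user/data/clickhouse
</code></pre>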
| VonC |
<p>I am trying to set up a Spark cluster on k8s. I've managed to create and set up a cluster with three nodes by following this article:
<a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="noreferrer">https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/</a></p>
<p>After that, when I tried to deploy Spark on the cluster, it failed at the spark-submit step.
I used this command:</p>
<pre><code>~/opt/spark/spark-2.3.0-bin-hadoop2.7/bin/spark-submit \
--master k8s://https://206.189.126.172:6443 \
--deploy-mode cluster \
--name word-count \
--class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=5 \
--conf spark.kubernetes.container.image=docker.io/garfiny/spark:v2.3.0 \
    --conf spark.kubernetes.driver.pod.name=word-count \
local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
</code></pre>
<p>And it gives me this error:</p>
<pre><code>Exception in thread "main" org.apache.spark.SparkException: The Kubernetes mode does not yet support referencing application dependencies in the local file system.
at org.apache.spark.deploy.k8s.submit.DriverConfigOrchestrator.getAllConfigurationSteps(DriverConfigOrchestrator.scala:122)
at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication$$anonfun$run$5.apply(KubernetesClientApplication.scala:229)
at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication$$anonfun$run$5.apply(KubernetesClientApplication.scala:227)
at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2585)
at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.run(KubernetesClientApplication.scala:227)
at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.start(KubernetesClientApplication.scala:192)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
</code></pre>
<pre><code>2018-06-04 10:58:24 INFO ShutdownHookManager:54 - Shutdown hook called
2018-06-04 10:58:24 INFO ShutdownHookManager:54 - Deleting directory /private/var/folders/lz/0bb8xlyd247cwc3kvh6pmrz00000gn/T/spark-3967f4ae-e8b3-428d-ba22-580fc9c840cd
</code></pre>
<p>Note: I followed this article for installing spark on k8s.
<a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html" rel="noreferrer">https://spark.apache.org/docs/latest/running-on-kubernetes.html</a></p>
| garfiny | <p>The error message comes from <a href="https://github.com/apache/spark/commit/5d7c4ba4d73a72f26d591108db3c20b4a6c84f3f" rel="noreferrer">commit 5d7c4ba4d73a72f26d591108db3c20b4a6c84f3f</a> and include the page you mention: "<a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#dependency-management" rel="noreferrer">Running Spark on Kubernetes</a>" with the mention that you indicate:</p>
<pre class="lang-scala prettyprint-override"><code>// TODO(SPARK-23153): remove once submission client local dependencies are supported.
if (existSubmissionLocalFiles(sparkJars) || existSubmissionLocalFiles(sparkFiles)) {
throw new SparkException("The Kubernetes mode does not yet support referencing application " +
"dependencies in the local file system.")
}
</code></pre>
<p>This is described in <a href="https://issues.apache.org/jira/browse/SPARK-18278" rel="noreferrer">SPARK-18278</a>:</p>
<blockquote>
<p>it wouldn't accept running a local: jar file, e.g. <code>local:///opt/spark/examples/jars/spark-examples_2.11-2.2.0-k8s-0.5.0.jar</code>, on my spark docker image (<code>allowsMixedArguments</code> and <code>isAppResourceReq</code> booleans in <code>SparkSubmitCommandBuilder.java</code> get in the way). </p>
</blockquote>
<p>And this is linked to <a href="https://github.com/kubernetes/kubernetes/issues/34377" rel="noreferrer">kubernetes issue 34377</a></p>
<p>The <a href="https://issues.apache.org/jira/browse/SPARK-22962" rel="noreferrer">issue SPARK-22962 "Kubernetes app fails if local files are used"</a> mentions:</p>
<blockquote>
<p>This is the resource staging server use-case. We'll upstream this in the 2.4.0 timeframe.</p>
</blockquote>
<p>In the meantime, that error message was introduced in <a href="https://github.com/apache/spark/pull/20320" rel="noreferrer">PR 20320</a>.</p>
<p>It includes the comment:</p>
<blockquote>
<p>The manual tests I did actually use a main app jar located on gcs and http.<br>
To be specific and for record, I did the following tests:</p>
<ul>
<li>Using a gs:// main application jar and a http:// dependency jar. Succeeded.</li>
<li>Using a https:// main application jar and a http:// dependency jar. Succeeded.</li>
<li>Using a local:// main application jar. Succeeded.</li>
<li>Using a file:// main application jar. Failed.</li>
<li>Using a file:// dependency jar. Failed.</li>
</ul>
</blockquote>
<p>That issue should have been fixed by now, and the <a href="https://stackoverflow.com/users/714376/garfiny">OP garfiny</a> confirms <a href="https://stackoverflow.com/questions/50637250/spark-on-k8s-getting-error-kube-mode-not-support-referencing-app-depenpendcie/50673683?noredirect=1#comment88396515_50673683">in the comments</a>:</p>
<blockquote>
<p>I used the newest <code>spark-kubernetes jar</code> to replace the one in <code>spark-2.3.0-bin-hadoop2.7</code> package. The exception is gone.</p>
</blockquote>
| VonC |
<p>I'm running a Nextflow workflow using the Kubernetes executor in a shared cluster that cannot allow privileged containers for security reasons.</p>
<p>When I run my nextflow workflow I get the error that clearly indicates that it's attempting to run my job in privileged mode:</p>
<pre><code>job-controller Error creating: admission webhook "validation.gatekeeper.sh" denied the request:
[psp-privileged-container] Privileged container is not allowed: nf-f48d29c6300b8f61af05447de0072d69,
securityContext: {"privileged": true}
</code></pre>
<p>The Nextflow documentation suggests that I can set <code>privileged=false</code> in the <code>k8s.securityContext</code>. I've tried two different values to set privileged to <code>false</code> (one based on Nextflow docs, <code>privileged</code>, the other based on Kubernetes docs, <code>allowPrivilegeEscalation</code>). Both still yield the same error.</p>
<pre><code>// Kubernetes config
process.executor = 'k8s'
k8s.namespace = 'mynamespace'
k8s.computeResourceType = 'Job'
k8s.securityContext.privileged = false // nextflow doc format
k8s.securityContext.allowPrivilegeEscalation = false // k8s doc format
</code></pre>
<p>I see the following related discussions that seem to suggest that this should work, but I'm having a hard time following these threads, or at least these threads seem to indicate that my <code>nextflow.config</code> is correct.</p>
<ul>
<li><p><a href="https://github.com/actions/actions-runner-controller/issues/792" rel="nofollow noreferrer">ERROR controller-runtime.controller - Privileged containers are not allowed spec.containers[1].securityContext.privileged #792</a></p>
</li>
<li><p>[feat: Support arbitrarily setting privileged: true for runner container #1383][2]</p>
</li>
</ul>
<hr />
<p>Update:</p>
<p>When nextflow submits my job, here's what the YAML that Kubernetes receives looks like. The job-level security context is empty, but the pod template has <code>spec.template.spec.containers[0].securityContext.privileged = true</code>.</p>
<p>Note that I also tried setting <code>k8s.containers.securityContext.privileged = false</code> in <code>nextflow.config</code>, but that was a wrong guess; I don't know how to index into the pod template settings.</p>
<p>It seems like I just need to figure out how to reference the pod template <code>securityContext</code> from <code>nextflow.config</code>.</p>
<pre><code>$ kubectl get job nf-878eb722258681cf0031aeeabe2fb132 -n mynamespace -o yaml
apiVersion: batch/v1
kind: Job
metadata:
annotations:
batch.kubernetes.io/job-tracking: ""
creationTimestamp: "2023-09-12T17:16:58Z"
generation: 1
labels:
nextflow.io/app: nextflow
nextflow.io/processName: helloWorld1
nextflow.io/runName: focused_pesquet
nextflow.io/sessionId: uuid-2a634652-ba40-4905-a9ca-ad0b948df4f0
nextflow.io/taskName: helloWorld1
name: nf-878eb722258681cf0031aeeabe2fb132
namespace: mynamespace
resourceVersion: "6387865826"
uid: 6a4ce939-cc01-4298-9d94-84219d750e84
spec:
backoffLimit: 0
completionMode: NonIndexed
completions: 1
parallelism: 1
selector:
matchLabels:
controller-uid: 6a4ce939-cc01-4298-9d94-84219d750e84
suspend: false
template:
metadata:
creationTimestamp: null
labels:
controller-uid: 6a4ce939-cc01-4298-9d94-84219d750e84
job-name: nf-878eb722258681cf0031aeeabe2fb132
spec:
containers:
- args:
- /usr/bin/fusion
- bash
- /fusion/s3/mybucket/nextflow/87/8eb722258681cf0031aeeabe2fb132/.command.run
env:
- name: FUSION_WORK
value: /fusion/s3/mybucket/nextflow/87/8eb722258681cf0031aeeabe2fb132
- name: AWS_S3_ENDPOINT
value: custom.ceph.endpoint
- name: FUSION_TAGS
value: '[.command.*|.exitcode|.fusion.*](nextflow.io/metadata=true),[*](nextflow.io/temporary=true)'
image: wave.seqera.io/wt/8602f8b269fe/library/ubuntu:latest
imagePullPolicy: Always
name: nf-878eb722258681cf0031aeeabe2fb132
resources:
limits:
ephemeral-storage: 1Gi
memory: 1Gi
requests:
cpu: "1"
ephemeral-storage: 1Gi
memory: 1Gi
securityContext:
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
ttlSecondsAfterFinished: 604800
status:
ready: 0
startTime: "2023-09-12T17:16:58Z"
uncountedTerminatedPods: {}
</code></pre>
| David Parks | <p>In the <a href="https://www.nextflow.io/docs/latest/executor.html#k8s-executor" rel="nofollow noreferrer">Nextflow's Kubernetes executor</a>, you have the ability to control the security context of the pods that are created.</p>
<p>You need to correctly specify the security context in such a way that it applies to the correct level within your Kubernetes job specification. It appears that Nextflow is <em>not</em> applying the <code>securityContext</code> to the individual containers within the pods based on the settings in your <code>nextflow.config</code>.</p>
<p>You can try and utilize the <a href="https://www.nextflow.io/docs/latest/process.html#process-pod" rel="nofollow noreferrer"><code>pod</code> directive</a> within your Nextflow process definitions to specify custom pod settings. You would specify the security context within the pod directive using Groovy syntax to create a YAML snippet that represents the correct Kubernetes configuration.</p>
<pre><code>process {
    ...
    pod = [
        'spec': [
            'securityContext': [
                'runAsUser': 1000
            ],
            'containers': [[
                'name': 'main',
                'securityContext': [
                    'privileged': false
                ]
            ]]
        ]
    ]
}
</code></pre>
<p>You can see the <code>pod</code> directive, used to define a custom pod configuration, using a map of configurations (and not a YAML multi-line string) to represent the YAML configuration for the Kubernetes pod. We specify a <code>securityContext</code> at the pod level with <code>runAsUser: 1000</code> to dictate the user ID that the process will run as (replace <code>1000</code> with an appropriate user ID for your environment).</p>
<p>Within the <code>containers</code> list, we specify a container named "main" and set <code>privileged: false</code> in its <code>securityContext</code> to disable privileged mode for that container.</p>
<p>That configuration should override the defaults applied by Nextflow and allow your job to run without privileged mode.</p>
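<p>After the next run, you can check (using the job name from your example output) that the container no longer requests privileged mode:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get job nf-878eb722258681cf0031aeeabe2fb132 -n mynamespace \
  -o jsonpath='{.spec.template.spec.containers[0].securityContext}'
</code></pre>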
| VonC |
<p>I am trying to understand the Gitlab K8s agent. But it looks like it requires the developer to commit changes to a manifest file before it can deploy them to K8s. This is a challenge when trying to do auto deploys using Gitlab pipelines because those pipelines run after the commit. So how is the user supposed to create a new commit in an automated way that the Gitlab K8s agent can pick up?</p>
<p>I am wondering if anyone is using Gitlab and their K8s agent for auto deploying to K8s. Would really appreciate if you could throw some light on this.</p>
| crossvalidator | <p>Note that the traditional <a href="https://docs.gitlab.com/ee/user/clusters/agent/gitops/agent.html" rel="nofollow noreferrer">GitOps with the agent for Kubernetes</a> has been deprecated with GitLab 16.2 (July 2023), and <a href="https://docs.gitlab.com/ee/user/clusters/agent/gitops.html" rel="nofollow noreferrer">replaced with Flux</a>, as shown <a href="https://docs.gitlab.com/ee/user/clusters/agent/gitops/flux_tutorial.html" rel="nofollow noreferrer">in this tutorial</a>.</p>
<p>Still, a traditional approach can work, with a bridge between the CI/CD pipeline and the GitOps workflow:</p>
<p>As part of your CI/CD pipeline, whenever you merge a project file, create or update the manifest files accordingly. That could be based on templates, dynamic values, or any other logic appropriate for your application.</p>
<p>Once the manifests are generated or updated, you should commit them back to the repository. That step is crucial because the GitLab Kubernetes Agent is watching the repository for changes in manifest files.<br />
You should add <code>[skip ci]</code> to the commit messages for auto-commits to prevent infinite loops of CI/CD pipeline runs.</p>
<p>For instance:</p>
<pre class="lang-yaml prettyprint-override"><code>stages:
- build
- deploy
build:
stage: build
script:
- echo "Build steps for your application"
# Potentially other build steps ...
generate_and_commit_manifests:
stage: deploy
script:
- echo "Generating/Updating manifests for $CI_COMMIT_SHORT_SHA"
- ./generate-or-update-manifests.sh $CI_COMMIT_SHORT_SHA
- git add path/to/manifests/*
- git commit -m "Auto-update manifests for $CI_COMMIT_SHORT_SHA [skip ci]"
- git push origin $CI_COMMIT_REF_NAME
deploy_stage:
stage: deploy
environment:
name: stage
script:
- echo "That is where you would traditionally deploy to staging, but it is now handled by the GitOps process"
when: manual
deploy_production:
stage: deploy
environment:
name: production
script:
- echo "That is where you would traditionally deploy to production, but it is now handled by the GitOps process"
when: manual
</code></pre>
<p>The build job will run, building your application.</p>
<p>The <code>generate_and_commit_manifests</code> job will run, updating the manifest files as necessary and committing them back to the repository. That commit will be noticed by the GitLab Kubernetes Agent, which will then apply the manifests to your Kubernetes cluster.</p>
<p>The <code>deploy_stage</code> and <code>deploy_production</code> jobs will be available to run manually, but they are more symbolic in this setup, since the actual deployment is handled by the GitLab Kubernetes Agent.</p>
<p>With this strategy, you are making use of both traditional CI/CD and GitOps workflows. The actual application build and preparation happens in the CI/CD pipeline, while the deployment is managed in a GitOps manner by the GitLab Kubernetes Agent.</p>
<hr />
<blockquote>
<p>On the other hand, as of today, this is what we are doing:</p>
<p>Let's suppose we need to merge a code change, by code change, I mean some feature code, as if we need to change <code>Payment.java</code> or <code>Payment.py</code> file, some project related code.</p>
<p>As of today, imagine an angry manager, who would force us:</p>
<blockquote>
<p>"for each and every comit/merge on a code file, you also need to manually change the <code>kubernetes/staging/payment-staging-manifest.yml</code>".</p>
</blockquote>
<p>He would also say:</p>
<blockquote>
<p>"if I see a commit, merge, without the manual change, fired!"</p>
</blockquote>
</blockquote>
<p>Your hypothetical scenario with the "angry manager" makes the challenge clearer: essentially, your team is being asked to ensure that every commit or merge involving a code change must <em>also</em> involve a corresponding change to the Kubernetes manifest file, or else... there are consequences.</p>
<p>That requirement calls for automation, as the ideal way to address this requirement without increasing the manual overhead for developers.</p>
<p>A first good practice is to embed the version/commit info in the application: Each time the code is built (e.g., during a CI job), embed the commit hash or another unique identifier in the application. That could be an environment variable, a file in the build, etc.</p>
<p>Then, after the application is built and before it is deployed, have a step in your CI/CD pipeline that automatically updates the Kubernetes manifest to use this new version.</p>
<ul>
<li>For instance, if you are using Docker and Kubernetes, every build of your application could produce a new Docker image with a tag corresponding to the commit hash.</li>
<li>Your CI/CD pipeline would then automatically update the manifest to use this new image.</li>
</ul>
<p>And once the manifest is updated, auto-commit and push this change back to your repository. Ensure you use a commit message tag like <code>[skip ci]</code> to prevent infinite CI loops.</p>
<p>Since the GitLab Kubernetes Agent is already watching for changes in the manifest, it will pick up this auto-committed change and deploy it.</p>
<p>Something like:</p>
<pre class="lang-yaml prettyprint-override"><code>update-manifest:
stage: deploy
script:
- COMMIT_HASH=$(git rev-parse --short HEAD)
- docker build -t myapp:$COMMIT_HASH .
- docker push myapp:$COMMIT_HASH
- sed -i "s/myapp:[a-z0-9]*/myapp:$COMMIT_HASH/" kubernetes/staging/payment-staging-manifest.yml
- git add kubernetes/staging/payment-staging-manifest.yml
- git commit -m "Auto-update manifest with image myapp:$COMMIT_HASH [skip ci]"
- git push origin $CI_COMMIT_REF_NAME
</code></pre>
<p>By using this approach, developers can continue their workflow of just committing code changes. The CI/CD pipeline handles the requirement of updating the Kubernetes manifest for each code change, ensuring you stay in the "good books" of the HAM (Hypothetical Angry Manager).</p>
<hr />
<blockquote>
<p>Some kind of:</p>
<pre class="lang-yaml prettyprint-override"><code>deploy_stage:
stage: deploy
environment:
name: stage
script:
- trigger the already configured gitlab kubernetes agent here without changing the manifest file.
</code></pre>
</blockquote>
<p>I understand the need: you want to trigger a deployment in the GitLab Kubernetes Agent without having to modify a manifest file or make any other changes in your repository.</p>
<p>To my knowledge, the GitLab Kubernetes Agent offers no out-of-the-box "trigger" command or API call, as its design philosophy revolves around watching Git repositories for changes ("pull").<br />
Still, as a workaround, you can use a GitLab CI/CD pipeline to force a synchronization.</p>
<p>Rather than making "dummy" changes to the manifest file, maintain an environment variable within the Kubernetes manifest that is set to be the commit hash (or some other unique value) of the pipeline.</p>
<p>Then modify the environment variable in the manifest file with the commit hash or a timestamp during the CI/CD pipeline run.<br />
Commit this change and push it back to the repository (again, use <code>[skip ci]</code> in the commit message to prevent an endless loop of pipelines). That ensures that the agent sees the change and gets "triggered".</p>
<p>Something like:</p>
<pre class="lang-yaml prettyprint-override"><code>deploy_stage:
stage: deploy
environment:
name: stage
script:
- COMMIT_HASH=$(git rev-parse --short HEAD)
- sed -i "s/COMMIT_ENV_VAR=.*/COMMIT_ENV_VAR=$COMMIT_HASH/" path/to/manifest.yml
- git config user.email "[email protected]"
- git config user.name "GitLab CI"
- git add path/to/manifest.yml
- git commit -m "Update manifest with commit hash [skip ci]"
- git push origin $CI_COMMIT_REF_NAME
</code></pre>
<p>The downside is that you are still making a change to the repo, but it is now a meaningful change: recording the commit hash of the build triggering the deployment, which can be helpful for tracking purposes.</p>
| VonC |
<p>We have been happily using ArgoCD with public repositories for a while, but we've run into problems trying to connect ArgoCD to a private repository. We have an <code>Application</code> that looks like this:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: example-app
namespace: argocd
spec:
destination:
name: example-cluster
namespace: open-cluster-management-agent
project: ops
source:
path: .
repoURL: ssh://[email protected]/example-org/example-repo.git
targetRevision: HEAD
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- Validate=false
- ApplyOutOfSyncOnly=true
</code></pre>
<p>And a corresponding secret with the secret key that looks like this:</p>
<pre><code>apiVersion: v1
metadata:
labels:
argocd.argoproj.io/secret-type: repository
name: example-repo
namespace: argocd
type: Opaque
stringData:
sshPrivateKey: |
-----BEGIN OPENSSH PRIVATE KEY-----
...
-----END OPENSSH PRIVATE KEY-----
url: ssh://[email protected]/example-org/example-repo.git
kind: Secret
</code></pre>
<p>ArgoCD fails to sync this app and shows the following error:</p>
<pre><code>rpc error: code = Unknown desc = error creating SSH agent: "SSH agent
requested but SSH_AUTH_SOCK not-specified"
</code></pre>
<p>I understand what that's telling me, but I'm not sure how to correct
the problem in the context of ArgoCD.</p>
| larsks | <p>You can review possible causes from <a href="https://github.com/argoproj/argo-cd/issues/1172" rel="nofollow noreferrer"><code>argoproj/argo-cd</code> issue 1172</a></p>
<p>Since you have the proper URL (<code>ssh://[email protected]/...</code>, as requested by <a href="https://github.com/operate-first/argocd-apps/pull/203" rel="nofollow noreferrer">PR 203</a>), double-check the indentation:</p>
<blockquote>
<p>It was an indentation problem caused by me :) in the <code>repositories.yaml</code> causing a wrong unmarshalling and an empty <code>sshPrivateKeySecret</code>.<br />
The right version is here:</p>
<pre class="lang-yaml prettyprint-override"><code>- sshPrivateKeySecret:
key: sshPrivateKey
name: bitbucket
url: [email protected]:MY-PROJECT/MY-REPO
</code></pre>
</blockquote>
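<p>To verify that ArgoCD actually picked up the credentials, you can list the configured repositories (a sketch, assuming the <code>argocd</code> CLI is logged in):</p>
<pre class="lang-bash prettyprint-override"><code># The repository should appear with a successful connection state
argocd repo list

# And the secret must carry the repository label shown above
kubectl get secret example-repo -n argocd --show-labels
</code></pre>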
| VonC |
<p>I am serving Jupyter notebooks through a Kubernetes cluster. And I've set <code>resources.limits</code> to prevent someone from draining all of the host server's memory.</p>
<p>One problem is that after the Jupyter notebook kernels crash and automatically restart, they do not throw any OOM errors when the container exceeds its memory limit, which leaves the user very confused.</p>
<p>So how can I make the jupyter notebook raise the OOM error when running with Kubernetes? </p>
| aisensiy | <p>If you have only one specific pod, you can monitor the events/logs, <a href="https://stackoverflow.com/a/38270491/6309">as in here</a>:</p>
<pre><code>kubectl get events --watch
kubectl logs -f podname
</code></pre>
<p>That being said, not <em>all</em> events in a <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">pod lifecycle</a> are properly reported, as shown in <a href="https://github.com/kubernetes/kubernetes/issues/38532#issuecomment-282154320" rel="nofollow noreferrer"><code>kubernetes/kubernetes</code> issue 38532</a> and the (abandoned) <a href="https://github.com/kubernetes/kubernetes/pull/45682" rel="nofollow noreferrer">PR 45682</a>.<br>
But you should still see <code>OOMKilled:true</code> when <code>docker inspect</code>'ing the pod.</p>
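<p>For example (the container ID is a placeholder; look it up with <code>docker ps</code> on the node running the pod):</p>
<pre><code>docker inspect --format '{{.State.OOMKilled}}' <container-id>
</code></pre>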
| VonC |
<p>I know continuous deployment can be done via Spinnaker (Blue/Green, Canary deployments, etc.),
but my question is: how does it differ from other GitOps tools like Weaveworks Flux?
Do both solve the same problem? Or do we need both tools in conjunction?</p>
| srikanth | <p>As mentioned in "<a href="https://www.weave.works/blog/the-gitops-pipeline" rel="nofollow noreferrer">The GitOps Pipeline - Part 2</a>"</p>
<blockquote>
<p><strong><a href="https://www.weave.works/oss/flux/" rel="nofollow noreferrer">Weave Flux</a></strong> is a tool that automates the staging and release of containers to Kubernetes, as well as services, deployments, network policies and even Istio routing rules. Flux is therefore ‘glue’ that maps all the relationships between code, services and running clusters.</p>
<p><strong>It can be seen as a leaner, Kubernetes-native alternative to <a href="http://blog.armory.io/what-is-immutable-infrastructure/" rel="nofollow noreferrer">Spinnaker</a></strong>.</p>
</blockquote>
| VonC |
<p>If there is an update in the Docker image, the rolling update strategy will update all the pods one by one in a DaemonSet. Similarly, is it possible to restart the pods gracefully without any changes to the DaemonSet config, or can it be triggered explicitly?</p>
<p>Currently, I am doing it manually by</p>
<p><code>kubectl delete pod <pod-name></code>,
one by one, until each pod gets into the Running state.</p>
| zillani | <p>You could try and use <a href="https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#maintenance-on-a-node" rel="nofollow noreferrer">Node maintenance operations</a>:</p>
<blockquote>
<p>Use kubectl drain to <strong>gracefully terminate all pods on the node</strong> while marking the node as unschedulable (with <a href="https://stackoverflow.com/a/48078350/6309"><code>--ignore-daemonsets</code></a>, from <a href="https://stackoverflow.com/users/9065705/konstantin-vustin">Konstantin Vustin</a>'s <a href="https://stackoverflow.com/questions/52866960/kubernetes-how-to-gracefully-delete-pods-in-daemonset#comment92649044_52867165">comment</a>):</p>
</blockquote>
<pre><code>kubectl drain $NODENAME --ignore-daemonsets
</code></pre>
<blockquote>
<p>This keeps new pods from landing on the node while you are trying to get them off.</p>
</blockquote>
<p>Then:</p>
<blockquote>
<p>Make the node schedulable again:</p>
</blockquote>
<pre><code>kubectl uncordon $NODENAME
</code></pre>
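<p>Note: on clusters with kubectl 1.15 or later, an explicit, graceful rolling restart of a DaemonSet is also available, without deleting pods by hand:</p>
<pre><code>kubectl rollout restart daemonset <daemonset-name>
kubectl rollout status daemonset <daemonset-name>
</code></pre>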
| VonC |
<p>I'm trying to install Gitlab Runner inside my cluster in Azure Kubernetes Service (AKS), but I have 2 errors:</p>
<ol>
<li><p>Helm Tiller doesn't appear in the application list of Gitlab CI:
Most tutorials say that it has to be installed, but today it is not even proposed, as you can see here:
<a href="https://i.stack.imgur.com/FJRcb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FJRcb.png" alt="enter image description here" /></a></p>
</li>
<li><p>When I install gitlab-runner from this list, I get an error message like
"Something went wrong while installing Gitlab Runner
Operation failed. Check pod logs for install-runner for more details"
So when I check the logs, I have this:
<a href="https://i.stack.imgur.com/wbI0K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wbI0K.png" alt="enter image description here" /></a></p>
</li>
</ol>
<p>The last two lines show an error; some answers say that I need to change the repo with a Helm command, so I do that from the Azure CLI bash in the portal, but I still have the same error. I execute the commands like this:</p>
<pre><code>helm repo rm stable
helm repo add stable https://charts.helm.sh/stable
</code></pre>
<p>And then I update the repos; do I need to give more arguments to these commands?</p>
| Elias Arellano | <p>GitLab 13.12 (May 2021) does clearly mention:</p>
<blockquote>
<h2><a href="https://about.gitlab.com/releases/2021/05/22/gitlab-13-12-released/#helm-v2-support" rel="nofollow noreferrer">Helm v2 support</a></h2>
</blockquote>
<blockquote>
<p>Helm v2 was <a href="https://helm.sh/blog/helm-v2-deprecation-timeline/" rel="nofollow noreferrer">officially deprecated</a> in November of 2020, with the <code>stable</code> repository being <a href="https://about.gitlab.com/blog/2020/11/09/ensure-auto-devops-work-after-helm-stable-repo/" rel="nofollow noreferrer">de-listed from the Helm Hub shortly thereafter</a>.</p>
<p><strong>With the release of GitLab 14.0 (June 2021), which will include the 5.0 release of the <a href="https://docs.gitlab.com/charts/" rel="nofollow noreferrer">GitLab Helm chart</a>, Helm v2 will no longer be supported.</strong></p>
<p>Users of the chart should <a href="https://helm.sh/docs/topics/v2_v3_migration/" rel="nofollow noreferrer">upgrade to Helm v3</a> to deploy GitLab 14.0 and above.</p>
</blockquote>
<p>So that is why Helm Tiller doesn't appear in the application list of Gitlab CI.</p>
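<p>If you still have Helm v2 releases to carry over, the official <code>helm-2to3</code> plugin can migrate them; a minimal sketch (the release name is a placeholder):</p>
<pre class="lang-bash prettyprint-override"><code>helm plugin install https://github.com/helm/helm-2to3
helm 2to3 convert <release-name>
</code></pre>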
| VonC |
<p>I am able to install Kubernetes successfully using the kubeadm method. My environment is behind a proxy. I applied the proxy settings to the system and to Docker, and I am able to pull images from Docker Hub without any issues. But at the last step, where we have to install the pod network (like Weave or Flannel), it's not able to connect via the proxy. It gives a timeout error. I am just checking whether there is an equivalent of <code>curl -x http://...</code> for <code>kubectl apply -f</code>? Until I perform this step, it says the master is NotReady.</p>
| ravi karthik bodicherla | <p>When you do work with a proxy for internet access, do not forget to configure the <code>NO_PROXY</code> environment variable, in addition of <code>HTTP(S)_PROXY</code>.</p>
<p>See <a href="https://docs.openshift.com/container-platform/3.4/install_config/http_proxies.html#configuring-no-proxy" rel="nofollow noreferrer">this example</a>:</p>
<blockquote>
<p>NO_PROXY accepts a comma-separated list of hosts, IP addresses, or IP ranges in <a href="https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing" rel="nofollow noreferrer">CIDR</a> format:</p>
<p>For master hosts</p>
<ul>
<li>Node host name</li>
<li>Master IP or host name</li>
</ul>
<p>For node hosts</p>
<ul>
<li>Master IP or host name</li>
</ul>
<p>For the Docker service</p>
<ul>
<li>Registry service IP and host name</li>
</ul>
</blockquote>
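<p>For a kubeadm-based setup, this typically translates to something like the following (a sketch; the proxy address, master IP, service CIDR and pod network CIDR are placeholders to adapt to your environment):</p>
<pre class="lang-bash prettyprint-override"><code>export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
# Keep cluster-internal traffic away from the proxy:
# localhost, the master IP, the service CIDR and the pod network CIDR
export NO_PROXY=localhost,127.0.0.1,<master-ip>,10.96.0.0/12,10.244.0.0/16
</code></pre>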
<p>See also for instance <a href="https://github.com/weaveworks/scope/issues/2246#issuecomment-281712035" rel="nofollow noreferrer">weaveworks/scope issue 2246</a>.</p>
| VonC |
<p>Imagine you want to get kind/struct called <code>KubeadmControlPlane</code> from the kubernetes API server.</p>
<p>This means you need to import the related struct into your code.</p>
<p>A matching import statement for <code>KubeadmControlPlane</code> would be:</p>
<blockquote>
<p>kubeadm "sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1"</p>
</blockquote>
<p>Up to now, I need too much time to find a matching import statement.</p>
<p>I use vscode.</p>
<p>How do you get from the CRD kind to an import statement?</p>
| guettli | <p>In principle, a <code>go get sigs.k8s.io/[email protected]</code> (done in the folder where your <code>go.mod</code> is) should be enough to:</p>
<ul>
<li>update your <code>go.mod</code>,</li>
<li>add the library in your <code>$GOPATH</code> and</li>
<li>enable VSCode auto-import to work.</li>
</ul>
<p>That means, when you start typing the name of a struct, like <code>KubeadmControlPlane</code>, the <a href="https://code.visualstudio.com/docs/languages/go" rel="nofollow noreferrer">VSCode Go extension</a> should suggest an auto-import if it can find a matching package in your <code>GOPATH</code> or in your project's vendor directory.</p>
<hr />
<p>If not, the manual process would be:</p>
<ol>
<li><p><strong>Identify the API Group and Version of the CRD:</strong> This information is usually found in the <code>apiVersion</code> field of the CRD YAML file. For example, the <code>KubeadmControlPlane</code> is part of the <code>controlplane.cluster.x-k8s.io/v1beta1</code> API group and version.</p>
</li>
<li><p><strong>Find the Go Package for the API Group:</strong> You need to find the corresponding Go package for this API group.<br />
In the case of the <code>KubeadmControlPlane</code>, it is part of the <code>sigs.k8s.io/cluster-api</code> project and the specific package path is <code>sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1</code>.<br />
A <a href="https://pkg.go.dev/search?q=KubeadmControlPlane" rel="nofollow noreferrer">search in <code>pkg.go.dev</code></a> works too, pending an official API to lookup packages (<a href="https://github.com/golang/go/issues/36785" rel="nofollow noreferrer">issue 36785</a>).</p>
</li>
<li><p><strong>Identify the Go Struct for the CRD:</strong> The Go struct is usually named similarly to the Kind of the CRD. In this case, it is <code>KubeadmControlPlane</code>.</p>
</li>
<li><p><strong>Create the Go Import Statement:</strong> Once you have the package path and struct name, you can create the Go import statement. For example:</p>
</li>
</ol>
<pre class="lang-golang prettyprint-override"><code>import (
kubeadm "sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1"
)
</code></pre>
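<p>To double-check a candidate import path before using it, <code>go doc</code> works well (assuming the module is already listed in your <code>go.mod</code>):</p>
<pre><code># prints the struct definition if the path and symbol both exist
go doc sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1.KubeadmControlPlane
</code></pre>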
| VonC |
<p>I have a war file that I deployed with Tomcat.<br />
I have a Dockerfile and at the end I build a kubernetes pod.</p>
<p>The problem is that my application's property files exist in the path <code>/usr/local/tomcat/webapps/myapp/WEB-INF/classes/config/</code> and not in the path <code>/usr/local/tomcat/webapps/myapp/WEB-INF/classes/</code>, so the application does not start.</p>
<p>Is it possible to set a classpath in Tomcat to point to a specific folder?<br />
For example, I want to set classpath like: <code>/usr/local/tomcat/webapps/myapp/WEB-INF/classes/config/</code>.<br />
I don't want to have duplicate property files.</p>
| Eugen Gîrlescu | <p>As <a href="https://stackoverflow.com/a/2161583/6309">mentioned here</a>:</p>
<blockquote>
<p><code>foo.properties</code> is supposed to be placed in one of the roots which are covered by the default classpath of a webapp, e.g. the webapp's <code>/WEB-INF/lib</code> and <code>/WEB-INF/classes</code>, the server's <code>/lib</code>, or the JDK/JRE's <code>/lib</code>.</p>
<ul>
<li>If the properties file is webapp-specific, best is to place it in <code>/WEB-INF/classes</code>.</li>
<li>If you're developing a standard WAR project in an IDE, drop it in <code>src</code> folder (the project's source folder).</li>
<li>If you're using a Maven project, drop it in <code>/main/resources</code> folder.</li>
</ul>
<p>You can alternatively also put it somewhere outside the default classpath and add its path to the classpath of the appserver.<br />
<strong>In for example Tomcat you can configure it as <code>shared.loader</code> property of <code>Tomcat/conf/catalina.properties</code>.</strong></p>
</blockquote>
<p><a href="https://gist.github.com/ghusta/12b50687a39bd02a88680df450a840f4" rel="nofollow noreferrer">Example</a>:</p>
<pre><code>FROM tomcat:8.5-jre8
# $CATALINA_HOME is defined in tomcat image
ADD target/my-webapp*.war $CATALINA_HOME/webapps/my-webapp.war
# Application config
RUN mkdir $CATALINA_HOME/app_conf/
ADD src/main/config/test.properties $CATALINA_HOME/app_conf/
# Modify property 'shared.loader' in catalina.properties
RUN sed -i -e 's/^shared.loader=$/shared.loader="${catalina.base}\/app_conf"/' $CATALINA_HOME/conf/catalina.properties
</code></pre>
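<p>To verify the change inside the running container (the container name <code>my-tomcat</code> is a placeholder):</p>
<pre><code># should print the patched shared.loader line
docker exec my-tomcat grep shared.loader /usr/local/tomcat/conf/catalina.properties
</code></pre>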
| VonC |
<p>Here's the full error <code>Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.</code></p>
<p>Here's my Kubectl config view</p>
<pre><code>apiVersion: v1
clusters: []
contexts:
- context:
cluster: ""
user: ""
name: dev
current-context: dev
kind: Config
preferences: {}
users: []
</code></pre>
<p>I'm running <code>minikube start</code>. It's stuck on <code>Starting VM...</code></p>
<p>In Hyper-V Manager, I have minikube VM running. </p>
| Alamgir Qazi | <p>Check out "<a href="https://medium.com/@JockDaRock/minikube-on-windows-10-with-hyper-v-6ef0f4dc158c" rel="nofollow noreferrer">Minikube on Windows 10 with Hyper-V</a>" by <a href="https://twitter.com/JockDaRock" rel="nofollow noreferrer">Jock Reed</a></p>
<p>The command to run, from a Windows CMD console as Administrator, is:</p>
<pre><code>minikube start --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch"
</code></pre>
<p>With "<code>Primary Virtual Switch</code>" being the name of the new "External" "Virtual network switch" you have created first.</p>
<p>Don't forget to turn off Dynamic Memory for the minikube VM (<a href="https://github.com/kubernetes/minikube/issues/2326" rel="nofollow noreferrer">minikube issue 2326</a>)</p>
<p>And possibly, <a href="https://medium.com/@JockDaRock/disabling-ipv6-on-network-adapter-windows-10-5fad010bca75" rel="nofollow noreferrer">disable IPv6 on Network Adapter Windows 10</a> (<a href="https://github.com/kubernetes/minikube/issues/754#issuecomment-340315883" rel="nofollow noreferrer">issue 754</a>).</p>
<p>Make sure to use the <a href="https://github.com/kubernetes/minikube/releases/download/v0.28.0/minikube-windows-amd64" rel="nofollow noreferrer"><code>v0.28.0/minikube-windows-amd64</code></a> executable, as mentioned in <a href="https://github.com/kubernetes/minikube/issues/1943#issuecomment-332083151" rel="nofollow noreferrer">issue 1943</a>.</p>
| VonC |
<p>I have Gitlab Kubernetes integration in my <code>Project 1</code> and I am able to access the <code>kube-context</code> within that project's pipelines without any issues.</p>
<p>I have another project, <code>Project 2</code> where I want to use the same cluster that I integrated in my <code>Project 1</code>.</p>
<p>This is my agent config file:</p>
<pre><code># .gitlab/agents/my-agent/config.yaml
ci_access:
projects:
- id: group/project-2
</code></pre>
<p>When I try to add a Kubernetes cluster in my <code>Project 2</code>, I am expecting to see the cluster name that I set up for <code>Project 1</code> in the dropdown, but I don't see it:</p>
<p><a href="https://i.stack.imgur.com/LYZfr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LYZfr.png" alt="gitlab-agent" /></a></p>
| HelmBurger | <p><a href="https://about.gitlab.com/releases/2022/12/22/gitlab-15-7-released/#share-cicd-access-to-the-agent-within-a-personal-namespace" rel="nofollow noreferrer">GitLab 15.7</a> (December 2022) suggests an alternative approach, which does not involve creating a new agent per project</p>
<blockquote>
<h2>Share CI/CD access to the agent within a personal namespace</h2>
<p>The GitLab agent for Kubernetes provides a more secure solution for managing your clusters with GitLab CI/CD.
You can use a single agent with multiple projects and groups by sharing access
to the agent connection. In previous releases, you could not share access with personal
namespaces. This release adds support for CI/CD connection sharing to personal namespaces.
You can now use a single agent from any of the projects under your personal namespace.</p>
<p><a href="https://i.stack.imgur.com/lGha6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lGha6.png" alt="https://about.gitlab.com/images/15_7/configure-allow-agent-cicd-access-sharing-within-a-personal-namesp.png -- Share CI/CD access to the agent within a personal namespace" /></a></p>
<p>See <a href="https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html#authorize-the-agent" rel="nofollow noreferrer">Documentation</a> and <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/356831" rel="nofollow noreferrer">Issue</a>.</p>
</blockquote>
<p>That means from a single agent, you should be able to access an agent connection from multiple projects (personal namespace or not).</p>
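<p>Concretely, extending the agent configuration from the question should then cover both cases (the personal-namespace path below is a hypothetical example):</p>
<pre><code># .gitlab/agents/my-agent/config.yaml
ci_access:
  projects:
    - id: group/project-2
    - id: my-username/my-personal-project  # personal namespace, supported since 15.7
</code></pre>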
| VonC |
<p>I need an Akka cluster to run multiple CPU-intensive jobs. I cannot predict how much CPU power I need. Sometimes load is high, while at other times there isn't much load. I guess autoscaling is a good option, which means, for example, that I should be able to specify that I need a minimum of 2 and a maximum of 10 actors. The cluster should scale up or down, with a cool-off period, as load goes up or down. Is there a way to do that?
I am guessing one could make a Docker image of the codebase and autoscale it using Kubernetes. Is that possible? Is there a native Akka solution?
Thanks</p>
| Apurva Singh | <p>If you consider a <a href="https://github.com/hseeberger/constructr" rel="nofollow noreferrer">project like <code>hseeberger/constructr</code></a> and its <a href="https://github.com/hseeberger/constructr/issues/179" rel="nofollow noreferrer">issue 179</a>, a native Akka solution should be based on <a href="https://github.com/akka/akka-management" rel="nofollow noreferrer"><code>akka/akka-management</code></a>:</p>
<blockquote>
<p>This repository contains interfaces to inspect, interact and manage various Parts of Akka, primarily Akka Cluster. Future additions may extend these concepts to other parts of Akka.</p>
</blockquote>
<p>There is a <a href="https://github.com/akka/akka-management/tree/master/bootstrap-joining-demo/kubernetes-api" rel="nofollow noreferrer">demo for kubernetes</a>.</p>
| VonC |
<p>I developed a service that listens on 0.0.0.0:8080. When I run the app locally it works and I can connect with a browser. I pushed it as an image to Docker Hub. From this image I created a pod and a service in my minikube cluster. With the command <code>minikube service --url</code> I get a URL like 192.168.49.2:30965, but I can't connect to this URL. I tried with curl and with a browser.</p>
<p>I tried connecting with curl and with a browser. In the browser I got</p>
<blockquote>
<p>"ERR_CONNECTION_TIME_OUT".</p>
</blockquote>
<p>curl in the shell got an error too.</p>
<p>When I use the command <code>minikube service --url</code> I get this message:</p>
<pre><code>! Executing "docker container inspect minikube --format={{.State.Status}}" took an unusually long time: 2.3171367s
* Restarting the docker service may improve performance.
http://192.168.49.2:30965
* Starting tunnel for service <name>.
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
</code></pre>
| Яков Грищенко | <p>You would need, as <a href="https://stackoverflow.com/q/71175415/6309">illustrated here</a> to:</p>
<ul>
<li>create a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> and apply it</li>
<li>then expose the port:<br />
<code>kubectl expose -f laravel-deployment.yaml --port=8000 --target-port=8000</code></li>
<li>And port-forward it:<br />
<code>kubectl port-forward <backend-deployment-xxxx-xxxx> 8000:8000</code></li>
</ul>
<p>That last step would allow you to access http://localhost:8000, instead of relying on the dynamic minikube IP.</p>
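<p>Put together, a minimal sketch (the file and deployment names are placeholders matching the example above):</p>
<pre><code>kubectl apply -f laravel-deployment.yaml
kubectl expose -f laravel-deployment.yaml --port=8000 --target-port=8000
kubectl port-forward deployment/laravel-deployment 8000:8000
curl http://localhost:8000   # should now reach the service
</code></pre>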
| VonC |
<p>I have searched the Helm documentation and this forum as well as others, and see no way to print out or list the environment variables that Helm uses... The docs state that you can set env vars with override flags, but I see no instructions to list what (if any) environment variables Helm uses ... </p>
<p>I was thinking something like printenv or echo ${HELM_HOME} or echo $(HELM_HOME)... </p>
<p>Thank you. </p>
| Jim_Brent | <p>The support for environment variables was initially discussed in <a href="https://github.com/kubernetes/helm/issues/944" rel="nofollow noreferrer">helm issue 944</a>, and implemented in <a href="https://github.com/kubernetes/helm/pull/982" rel="nofollow noreferrer">PR 982</a> for Helm 2.0 in July 2016.</p>
<p><a href="https://github.com/technosophos/k8s-helm/blob/4b6fbbb67f32d10d23696f2a51b36684579fa763/cmd/helm/install.go#L41-L52" rel="nofollow noreferrer">As documented</a></p>
<blockquote>
<p>To override values in a chart, use either the '<code>--values</code>' flag and pass in a file
or use the '<code>--set</code>' flag and pass configuration from the command line.</p>
<pre><code>$ helm install -f myvalues.yaml redis
</code></pre>
<p>or </p>
<pre><code>$ helm install --set name=prod redis
</code></pre>
<p>To check the generated manifests of a release without installing the chart,
the '<code>--debug</code>' and '<code>--dry-run</code>' flags can be combined.<br>
This will still require a round-trip to the Tiller server.</p>
</blockquote>
<p>The last part should at least allow you to check the generated manifests of a release, which should include environment variables.</p>
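<p>For example, a hedged one-liner combining both flags (the chart path and value below are placeholders):</p>
<pre><code>$ helm install --dry-run --debug ./mychart --set name=prod
</code></pre>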
<p><a href="https://github.com/technosophos/k8s-helm/blob/4b6fbbb67f32d10d23696f2a51b36684579fa763/cmd/helm/install.go#L150-L169" rel="nofollow noreferrer"><code>install.go</code></a> offers a method <code>(v *values) Set(data string)</code>: a setter... but no getter, beside a <code>String()</code> method.</p>
| VonC |
<p>Popular version control servers (like GitHub) likely handle an immense amount of traffic and need scalable & durable data storage. I was wondering how this is implemented in the background. </p>
<p>I have few guesses/assumptions on how it works but I'm not sure if they are 100% accurate:</p>
<ul>
<li>Repositories are probably stored on disk instead of some database solution (because git server is already self sufficient AFAIK)</li>
<li>A single host to serve the entire traffic is probably not enough, so some load balancing is needed</li>
<li>Since multiple servers are needed, each having their own storage, there is no point in keeping all repositories in all servers. (So I would expect each repository to be mapped to a host)</li>
<li>For reliability, servers are probably not running on single hosts but rather on clusters of replicas that are kept in sync (maybe using Kubernetes etc.), and these are probably backed up periodically along with database backups.</li>
<li>There probably is a main load balancer application that redirects the request to appropriate cluster (so it knows which repository is mapped to which cluster)</li>
</ul>
<p>One other possibility is just storing the entire <code>.git</code> in a database as a blob and having a scalable stateless application fetch that <code>.git</code> for each request, do the operations, store the result again and then send the response; however, this is probably a really inefficient solution, so I thought it is unlikely to be the underlying mechanism. </p>
<p>So my main questions are: </p>
<ul>
<li>Do assumptions above make sense / are they accurate? </li>
<li>How would one implement a load balancer application that all git requests are directed to the appropriate cluster? (eg. would mapping repositories with cluster id&ips, storing this in a database, and putting up a nodejs application that redirects the incoming requests to matching cluster ip work?)</li>
<li>How would one go about implementing a git server that scales in general if above is inaccurate? (in case there are any better approaches)</li>
</ul>
| ozgeneral | <p>No need to rely on guesses.</p>
<p>For GitHub specifically, the <a href="https://githubengineering.com/" rel="noreferrer">githubengineering blog</a> details what they had to use in order to scale to their current usage level.</p>
<p>Beside upgrading Rails or removing JQuery, on the frontend side, they have:</p>
<ul>
<li><a href="https://github.blog/2018-08-08-glb-director-open-source-load-balancer/" rel="noreferrer">GLB: GitHub’s open source load balancer</a>: At GitHub, we serve tens of thousands of requests every second out of our network edge, operating on <a href="http://githubengineering.com/githubs-metal-cloud/" rel="noreferrer">GitHub’s metal cloud</a>. </li>
<li><a href="https://github.blog/2018-06-20-mysql-high-availability-at-github/" rel="noreferrer">MySQL High Availability at GitHub</a>: GitHub uses MySQL as its main datastore for all things non-git, and its availability is critical to GitHub’s operation. </li>
<li><a href="https://github.blog/2017-10-13-stretching-spokes/" rel="noreferrer">Stretching Spokes</a>: GitHub’s Spokes system stores multiple distributed copies of Git repositories. This article discusses how we got Spokes replication to span widely separated datacenters.</li>
</ul>
<p>Regarding Kubernetes:</p>
<ul>
<li>"<a href="https://github.blog/2017-08-16-kubernetes-at-github/" rel="noreferrer">Kubernetes at GitHub</a>" (2017)</li>
<li>"<a href="https://github.blog/2019-11-21-debugging-network-stalls-on-kubernetes/" rel="noreferrer">Debugging network stalls on Kubernetes </a>" (2019)</li>
</ul>
| VonC |
<p>I'm currently learning Kubernetes as part of a project and facing a small hurdle which I hope you guys can help me in crossing.</p>
<p>The ask is to build a docker application that can be accessed over the internet by anyone anywhere. Below are the steps I followed.</p>
<ol>
<li>I'm using Windows laptop</li>
<li>I used VMWare Workstation to install Ubuntu 20 LTS.</li>
<li>Inside Ubuntu, I've deployed my docker image - using ubuntu terminal</li>
<li>Currently, the applications are accessible within Ubuntu, using localhost as well as the URL generated by minikube (via the command <code>minikube service <application_name> --url</code>).</li>
<li>Since localhost works within Ubuntu, I used <code>ip addr show</code> to get Ubuntu's IP address and then tried accessing it from my Windows machine, with no result.</li>
</ol>
<p>Now I want to use the postman installed on my windows machine to hit the container that's running within ubuntu.</p>
<p>I'm new to this entire process so apologies if my question sounds dumb.</p>
| TechEnthu | <p>First, make sure your network mode for your VMWare is "bridge" (<a href="https://stackoverflow.com/a/33814957/6309">as in here, for VirtualBox</a>, but the same idea applies to VMWare Player)</p>
<p>Then you can use <strong><a href="https://ngrok.com/" rel="nofollow noreferrer">ngrok</a></strong> (as described in "<a href="https://medium.com/oracledevs/expose-docker-container-services-on-the-internet-using-the-ngrok-docker-image-3f1ea0f9c47a" rel="nofollow noreferrer">Expose Docker Container services on the Internet using the ngrok docker image</a>" from <strong>Lucas Jellema</strong>) to generates a public URL and ensures that all requests sent to that URL are forwarded to a local agent (running in its own, stand alone Docker container) that can then pass them on to the local service.</p>
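<p>As a minimal sketch, assuming your app listens on port 8080 inside the Ubuntu VM and ngrok is installed there:</p>
<pre><code># inside the Ubuntu VM; prints a public https://xxxx.ngrok.io URL tunnelling to localhost:8080
ngrok http 8080
</code></pre>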
| VonC |
<p>Microk8s is installed on default port 16443. I want to change it to 6443. I am using Ubuntu 16.04. I have installed microk8s using snapd and conjure-up.</p>
<p>None of the following options I have tried worked.</p>
<ol>
<li>Tried to edit the port in <code>/snap/microk8s/current/kubeproxy.config</code>. As the volume is read-only, I could not edit it.</li>
<li>Edited the <code>/home/user_name/.kube/config</code> and restarted the cluster.</li>
<li>Tried using the command and restarted the cluster
<code>sudo kubectl config set clusters.microk8s-cluster.server https://my_ip_address:6443</code>.</li>
<li>Tried to use <code>kubectl proxy --port=6443 --address=0.0.0.0 --accept-hosts=my_ip_address &</code>. It listens on 6443, but only HTTP, not HTTPS traffic.</li>
</ol>
| Srinivasa Rao | <p>That was initially resolved in <a href="https://github.com/ubuntu/microk8s/issues/43#issuecomment-434383633" rel="noreferrer">microk8s issue 43</a>, but detailed in <a href="https://github.com/ubuntu/microk8s/issues/300#issuecomment-476995716" rel="noreferrer">microk8s issue 300</a>:</p>
<blockquote>
<p>This is the right one to use for the latest microk8s:</p>
</blockquote>
<pre><code>#!/bin/bash
# define our new port number
API_PORT=8888
# update kube-apiserver args with the new port
# tell other services about the new port
sudo find /var/snap/microk8s/current/args -type f -exec sed -i "s/8080/$API_PORT/g" {} ';'
# create new, updated copies of our kubeconfig for kubelet and kubectl to use
mkdir -p ~/.kube && microk8s.config -l | sed "s/:8080/:$API_PORT/" | sudo tee /var/snap/microk8s/current/kubelet.config > ~/.kube/microk8s.config
# tell kubelet about the new kubeconfig
sudo sed -i 's#${SNAP}/configs/kubelet.config#${SNAP_DATA}/kubelet.config#' /var/snap/microk8s/current/args/kubelet
# disable and enable the microk8s snap to restart all services
sudo snap disable microk8s && sudo snap enable microk8s
</code></pre>
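<p>After the snap restarts, a quick sanity check (the new port should now appear in the kubeconfig):</p>
<pre><code>microk8s.kubectl config view | grep server
microk8s.kubectl get nodes
</code></pre>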
| VonC |
<p>I want my pods to be gracefully recycled from my deployments after a certain period of time, such as every week or month. I know I can add a cron job for that if I know the Kubernetes command. </p>
<p>The question is what is the best approach to do this in Kubernetes. Which command will let me achieve this goal?</p>
<p>Thank you very much for helping me out on this.</p>
| rayhan | <p>As the <a href="https://stackoverflow.com/users/1527879/rayhan">OP rayhan</a> has <a href="https://stackoverflow.com/a/53947838/6309">found out</a>, and as commented in <a href="https://github.com/kubernetes/kubernetes/issues/13488#issuecomment-446101458" rel="noreferrer"><code>kubernetes/kubernetes</code> issue 13488</a>, a kubectl patch of an environment variable is enough.</p>
<p>But... K8s 1.15 will bring <a href="https://github.com/kubernetes/kubernetes/issues/13488#issuecomment-481023838" rel="noreferrer"><code>kubectl rollout restart</code></a>... that is when <a href="https://github.com/kubernetes/kubernetes/pull/77423" rel="noreferrer">PR 77423</a> is accepted and merged.</p>
<blockquote>
<p><code>kubectl rollout restart</code> now works for daemonsets and statefulsets.</p>
</blockquote>
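<p>Until then, a minimal sketch of a scheduled restart from any machine with cluster access (the deployment name <code>my-app</code> is a placeholder; on clusters older than 1.15, fall back to the environment-variable patch mentioned above):</p>
<pre><code># crontab entry: restart the deployment every Monday at 03:00
0 3 * * 1 kubectl rollout restart deployment/my-app
</code></pre>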
| VonC |
<p>I'm starting with Kubernetes and I'm trying to update the image in Docker Hub that is used for the Kubernetes pod creation; then, with the <code>kubectl rollout restart deployment deploymentName</code> command, it should pull the newest image and rebuild the pods.
The problem I'm facing is that it only works when I specify a version in the tag, both in the image and in the <code>deployment.yaml</code> file.</p>
<p>In my repo I have 2 images <code>fixit-server:latest</code> and <code>fixit-server:0.0.2</code> (the actual latest one).</p>
<p>With <code>deployment.yaml</code> file set as</p>
<pre><code>spec:
containers:
- name: fixit-server-container
image: vinnytwice/fixit-server
# imagePullPolicy: Never
resources:
limits:
memory: "128Mi"
cpu: "500m"
</code></pre>
<p>I run <code>kubectl apply -f infrastructure/k8s/server-deployment.yaml</code> and it gets created, but when running <code>kubectl get pods</code> I get</p>
<pre><code>vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get pods
NAME READY STATUS RESTARTS AGE
fixit-server-5c7bfbc5b7-cgk24 0/1 ErrImagePull 0 7s
fixit-server-5c7bfbc5b7-g7f8x 0/1 ErrImagePull 0 7s
</code></pre>
<p>I then instead specify the version number in the <code>deployment.yaml</code> file</p>
<pre><code> spec:
containers:
- name: fixit-server-container
image: vinnytwice/fixit-server:0.0.2
# imagePullPolicy: Never
resources:
limits:
memory: "128Mi"
cpu: "500m"
</code></pre>
<p>I run <code>kubectl apply -f infrastructure/k8s/server-deployment.yaml</code> again and get <code>configured</code> as expected.
Running <code>kubectl rollout restart deployment fixit-server</code> I get <code>restarted</code> as expected.
But <code>kubectl get pods</code> still shows</p>
<pre><code>vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get pods
NAME READY STATUS RESTARTS AGE
fixit-server-5c7bfbc5b7-cgk24 0/1 ImagePullBackOff 0 12m
fixit-server-5d78f8848c-bbxzx 0/1 ImagePullBackOff 0 2m58s
fixit-server-66cb98855c-mg2jn 0/1 ImagePullBackOff 0 74s
</code></pre>
<p>So I deleted the deployment and applied it again and pods are now running correctly.</p>
<p>Why, when omitting a version number for the image (which should imply <code>:latest</code>), doesn't the <code>:latest</code> tagged image get pulled from the repo?
What's the correct way of using the <code>:latest</code> tagged image?
Thank you very much.
Cheers</p>
<p>repo:
<a href="https://i.stack.imgur.com/Cz4Xz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cz4Xz.png" alt="enter image description here" /></a></p>
<p>images:</p>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
vinnytwice/fixit-server 0.0.2 53cac5b0a876 10 hours ago 1.3GB
vinnytwice/fixit-server latest 53cac5b0a876 10 hours ago 1.3GB
</code></pre>
| Vincenzo | <p>You can use <a href="https://github.com/ryandaniels/docker-script-find-latest-image-tag/blob/aed09da3813429cbe979e34a050a48c3b19a7002/docker_image_find_tag.sh" rel="nofollow noreferrer"><code>docker_image_find_tag.sh</code></a> to check if your image has a <code>latest</code> tag or not.<br />
It will show the tag/version even when the image shows as <code>image:<none></code> or <code>image:latest</code>.</p>
<p>That way, as mentioned in "<a href="https://komodor.com/learn/how-to-fix-errimagepull-and-imagepullbackoff/" rel="nofollow noreferrer">How to fix ErrImagePull and ImagePullBackoff</a>", you can check if this is linked to:</p>
<blockquote>
<ul>
<li><strong>Cause</strong>: Pod specification provides an invalid tag, or fails to provide a tag</li>
<li><strong>Resolution</strong>: Edit pod specification and provide the correct tag.<br />
If the image does not have a latest tag, you must provide a valid tag</li>
</ul>
</blockquote>
<p>And:</p>
<blockquote>
<p>What's the correct way of using the <code>:latest</code> tagged image</p>
</blockquote>
<p>Ideally, by <em>not</em> using it ;) <code>latest</code> can shift at any time, and by using a <em>fixed</em> label, you ensure better reproducibility of your deployment.</p>
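<p>If you do pin versions, a hedged sketch of rolling out a new one without editing the YAML (reusing the deployment and container names from the question; <code>0.0.3</code> is a hypothetical next tag):</p>
<pre><code>kubectl set image deployment/fixit-server fixit-server-container=vinnytwice/fixit-server:0.0.3
kubectl rollout status deployment/fixit-server
</code></pre>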
| VonC |
<p>I have configured the following ingress for traefik, but traefik is sending all the traffic to app-blue-release. Ideally it should send only 30% of the traffic to blue and 70% to green, but it's not working as expected.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
traefik.frontend.entryPoints: http
traefik.ingress.kubernetes.io/service-weights: |-
app-green-release: 70.0
app-blue-release: 30.0
creationTimestamp: 2019-06-04T06:00:37Z
generation: 2
labels:
app: traefik-app
name: traefik-app
namespace: mynamespace
resourceVersion: "645536328"
selfLink: /apis/extensions/v1beta1/namespaces/mynamespace/ingresses/traefik-app
uid: 4637377-747b-11e9-92ea-005056aeabf7
spec:
rules:
- host: mycompany2.com
http:
paths:
- backend:
serviceName: app-release
servicePort: 8080
- host: mycompany.com
http:
paths:
- backend:
serviceName: app-ui-release
servicePort: 80
path: /widget
- backend:
serviceName: app-green-release
servicePort: 8080
path: /
- backend:
serviceName: app-blue-release
servicePort: 8080
path: /
status:
loadBalancer: {}
</code></pre>
<p>I am using the following traefik version:
<em>traefik:v1.7.11-alpine</em></p>
<p>Earlier, when the weights were configured as 10 (for blue) and 90 (for green), it was working fine. But once we changed them to 30 and 70 respectively, this problem started happening.</p>
<p>Has anyone faced such an issue before? Thanks for your help in advance.</p>
| nagendra547 | <p>That seems to be followed by <a href="https://github.com/containous/traefik/issues/4494" rel="nofollow noreferrer">traefik issue 4494</a> (instead of your own <a href="https://github.com/containous/traefik/issues/4940" rel="nofollow noreferrer">issue 4940</a>)</p>
<blockquote>
<p>the annotation <code>ingress.kubernetes.io/service-weights</code> has been added in <a href="https://github.com/containous/traefik/blob/master/CHANGELOG.md#v170-2018-09-24" rel="nofollow noreferrer">v1.7</a>, before the annotation was ignored.</p>
</blockquote>
<p>However, <a href="https://github.com/containous/traefik/issues/4494#issuecomment-500892876" rel="nofollow noreferrer">as of June 11th, 2019</a>, Damien Duportal (Træfik's Developer Advocate) adds:</p>
<blockquote>
<p>There is no known workaround for now.<br>
We are working on this, but as the version 2.0 of Traefik is currently worked on, we have to wait :)</p>
</blockquote>
<hr>
<p>This comes from <a href="https://github.com/containous/traefik/pull/3112" rel="nofollow noreferrer">PR 3112</a></p>
<blockquote>
<p>Provides a new ingress annotation ingress.kubernetes.io/backend-weights which specifies a YAML-encoded, percentage-based weight distribution. With this annotation, we can do canary release by dynamically adjust the weight of ingress backends.</p>
</blockquote>
<p>(called initially <code>ingress.kubernetes.io/percentage-weights</code> before being renamed <code>ingress.kubernetes.io/service-weights</code> in <a href="https://github.com/containous/traefik/pull/3112/commits/11f6079d4814192db745c1175b9729bf64069de5" rel="nofollow noreferrer">commit 11f6079</a>)</p>
<p>The issue is still pending.<br>
Try first to upgrade to <a href="https://hub.docker.com/_/traefik" rel="nofollow noreferrer">v1.7.12-alpine</a> to check whether the issue persists.</p>
<p><a href="https://github.com/yue9944882/traefik/blob/11f6079d4814192db745c1175b9729bf64069de5/docs/configuration/backends/kubernetes.md#user-content-general-annotations" rel="nofollow noreferrer">The example</a> mentions:</p>
<pre><code>service_backend1: 1% # Note that the field names must match service names referenced in the Ingress object.
service_backend2: 33.33%
service_backend3: 33.33% # Same as 33.33%, the percentage sign is optional
</code></pre>
<p>So in your case, do try:</p>
<pre><code> app-green-release: 70%
app-blue-release: 30%
</code></pre>
| VonC |
<p>I'm looking for a way to write a scaler for my application, which is running on Minikube, to scale it up and down based on timestamps. Any idea?</p>
| ahmokhtari | <p>That would be an <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a> (see its <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">Walkthrough here</a>), which automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization (or, with <strong><a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md" rel="nofollow noreferrer">custom metrics support</a></strong>, on some other application-provided metrics)</p>
<p>In your case, the custom metric would be the time.</p>
<p>You can then follow "<a href="https://itsmetommy.com/2018/07/01/kubernetes-horizontal-pod-autoscaler-using-minikube/" rel="nofollow noreferrer">Kubernetes: Horizontal Pod Autoscaler using Minikube</a>" from <a href="https://twitter.com/r1tommy" rel="nofollow noreferrer">Tommy Elmesewdy</a> as a practical example to implement one such autoscaler on Minikube.</p>
<p>In your case, you should ensure custom metrics are enabled:</p>
<pre><code>minikube start --extra-config kubelet.EnableCustomMetrics=true
</code></pre>
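<p>For plain CPU-based scaling, the imperative equivalent would be (the deployment name is a placeholder):</p>
<pre><code>kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80
</code></pre>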
| VonC |
<p>I checked the pods in the kube-system namespace and noticed that some pods share the same IP address. The pods that share the same IP address appear to be on the same node.</p>
<p><a href="https://i.stack.imgur.com/OSiy0.png" rel="noreferrer"><img src="https://i.stack.imgur.com/OSiy0.png" alt="enter image description here"></a> </p>
<p>In the Kubernetes documentation it says that "Every pod gets its own IP address." (<a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/networking/</a>). I'm confused as to how the same IP for some pods came about.</p>
| edmamerto | <p>This was reported in <a href="https://github.com/kubernetes/kubernetes/issues/51322" rel="noreferrer">issue 51322</a> and can depend on the <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/" rel="noreferrer">network plugin</a> you are using.</p>
<p>The issue was seen when using the basic <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#kubenet" rel="noreferrer">kubenet</a> network plugin on Linux.</p>
<p>Sometimes, a <a href="https://github.com/kubernetes/kubernetes/issues/51322#issuecomment-388517154" rel="noreferrer">reset/reboot can help</a>:</p>
<blockquote>
<p>I suspect nodes have been configured with overlapped podCIDRs for such cases.<br>
The pod CIDR could be checked by <code>kubectl get node -o jsonpath='{.items[*].spec.podCIDR}'</code></p>
</blockquote>
| VonC |
<p>We are using <a href="https://github.com/uber/kraken" rel="nofollow noreferrer">Uber Kraken</a> to increase container download speeds in a kubernetes cluster, it works great.</p>
<p>However, we commonly mutate tags (upload a new version of <code>:latest</code>). In the <a href="https://github.com/uber/kraken#limitations" rel="nofollow noreferrer">limitations section of the Uber Kraken Github page</a> they state:</p>
<blockquote>
<p>Mutating tags (e.g. updating a latest tag) is allowed, however, a few things will not work: tag lookups immediately afterwards will still return the old value due to Nginx caching, and replication probably won't trigger. We are working on supporting this functionality better. If you need tag mutation support right now, please <strong>reduce the cache interval of the build-index component</strong>. If you also need replication in a multi-cluster setup, please consider setting up another Docker registry as Kraken's backend.</p>
</blockquote>
<p>What do they mean by <em>"reduce the cache interval of the build-index component"</em>? I don't quite understand what they are referring to in the docker universe.</p>
| David Parks | <p>This sentence comes from <a href="https://github.com/uber/kraken/pull/61/files" rel="nofollow noreferrer">PR 61</a>.</p>
<p>It offers a better alternative than the previous documentation, which stated:</p>
<blockquote>
<p>Mutating tags is allowed, however the behavior is undefined.<br />
A few things will go wrong:</p>
<ul>
<li>replication probably won't trigger, and</li>
<li>most tag lookups will probably still return the old tag due to caching.</li>
</ul>
<p>We are working on supporting this functionality better.<br />
If you need mutation (e.g. updating a latest tag) right now, please consider implementing your own index component on top of a consistent key-value store.</p>
</blockquote>
<p>That last sentence was before:</p>
<blockquote>
<p>please consider setting up another docker registry as Kraken's backend,
and reduce cache interval of build-index component</p>
</blockquote>
<p>That would replace the current <a href="https://github.com/uber/kraken/blob/59fa7bcab998ca1abb7a2c268017db240cb0835e/docs/CONFIGURATION.md#configuring-storage-backend-for-origin-and-build-index" rel="nofollow noreferrer">Storage Backend For Origin And Build-Index</a>.</p>
| VonC |
<p>In Nomad, we have an env variable named NOMAD_ALLOC_INDEX that gives me the index of the container. Is there a similar env variable in Kubernetes for the pods to get the pod index?</p>
<p>Could you please provide your inputs?</p>
<p>Thanks,
Sarita</p>
| Sarita Singe | <p>Not really, unless you are using <a href="https://kubernetes.io/blog/2021/04/19/introducing-indexed-jobs/" rel="nofollow noreferrer">indexed jobs (Kubernetes 1.21, Apr. 2021)</a>.</p>
<p>For indexed jobs, the index is exposed to each Pod in the <code>batch.kubernetes.io/job-completion-index</code> annotation and the <code>JOB_COMPLETION_INDEX</code> environment variable.</p>
<p>Official documentation: "<strong><a href="https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/" rel="nofollow noreferrer">Indexed Job for Parallel Processing with Static Work Assignment</a></strong>"</p>
<p>You can use the builtin <code>JOB_COMPLETION_INDEX</code> environment variable set by the Job controller for all containers.<br />
Optionally, you can <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">define your own environment variable through the downward API</a> to publish the index to containers.</p>
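<p>A minimal sketch of such an Indexed Job (names and image are illustrative), where each pod reads its own index from the environment:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo
spec:
  completions: 3
  parallelism: 3
  completionMode: Indexed     # requires Kubernetes 1.21+
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo my index is $JOB_COMPLETION_INDEX"]
</code></pre>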
<hr />
<p>There is also <a href="https://github.com/kubernetes/enhancements/pull/2630/files" rel="nofollow noreferrer"><code>kubernetes/enhancements</code> PR2630</a>, where the Pods' hostnames are set to <code>$(job-name)-$(index)</code>.</p>
<p>This is not yet integrated into Kubernetes, but it would mean you can derive the pod hostname from the job name and index, allowing you to get its IP. That means pods can address each other with a DNS lookup and communicate directly using Pod IPs.</p>
| VonC |
<p>Small question regarding Kubernetes and how one pod can talk to another pod (two pods in total) when they are in the same namespace, hopefully without a very complex solution.</p>
<p>Setup:
I have one pod A, called juliette-waiting-for-romeo-tocall, which exposes a REST endpoint <code>/romeo-please-call-me</code>.</p>
<p>Then, I have a second pod B, called romeo-wants-to-call-juliette, whose code does nothing but try to call Juliette on the endpoint <code>/romeo-please-call-me</code>.</p>
<p>The pod juliette-waiting-for-romeo-tocall is not a complex pod exposed to the internet. This pod does not want any public internet calls, does not want any other cluster to call, and does not want pods from a different namespace to call. Juliette just wants her Romeo, who should be in the same namespace, to call her.</p>
<p>Nonetheless, Romeo and Juliette are not yet engaged: they are not within one pod, they don't use the sidecar pattern, do not share any volume, etc... They are really just two different pods under the same namespace so far.</p>
<p>What would be the easiest solution for Romeo to call Juliette please?</p>
<p>When I do <code>kubectl get all</code>, I do see:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/juliette-waiting-for-romeo-tocall-7cc8b84cc4-xw9n7 1/1 Running 0 2d
pod/romeo-wants-to-call-juliette-7c765f6987-rsqp5 1/1 Running 0 1d
service/juliette-waiting-for-romeo-tocall ClusterIP 111.111.111.111 <none> 80/TCP 2d
service/romeo-wants-to-call-juliette ClusterIP 222.222.22.22 <none> 80/TCP 1d
</code></pre>
<p>So far, in Romeo's code, I tried a curl on the cluster IP address + port (in this fake example, 111.111.111.111:80/romeo-please-call-me), but I am not getting anything back.</p>
<p>What is the easiest solution, please?</p>
<p>Thank you</p>
| PatPanda | <p>More generally, pod-to-pod communication is documented by "<a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">Cluster Networking</a>"</p>
<blockquote>
<p>Every Pod gets its own IP address.</p>
<p>This means you do not need to explicitly create links between Pods and you almost never need to deal with mapping container ports to host ports.</p>
<p>This creates a clean, backwards-compatible model where Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.</p>
<p>Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):</p>
<ul>
<li>pods on a node can communicate with all pods on all nodes without NAT</li>
<li>agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node</li>
</ul>
<p>Kubernetes IP addresses exist at the Pod scope - containers within a Pod share their network namespaces - including their IP address and MAC address.</p>
<p>This means that containers within a Pod can all reach each other's ports on localhost.<br />
This also means that containers within a Pod must coordinate port usage, but this is no different from processes in a VM. This is called the "IP-per-pod" model.</p>
</blockquote>
<p>But that depends on which <a href="https://github.com/containernetworking/cni" rel="nofollow noreferrer">CNI (Container Network Interface)</a> has been used for your Kubernetes.</p>
<p>The <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/network/networking.md#pod-to-pod" rel="nofollow noreferrer">Kubernetes Networking design document</a> mentions for pod-to-pod:</p>
<blockquote>
<p>Because every pod gets a "real" (not machine-private) IP address, pods can communicate without proxies or translations.<br />
The pod can use well-known port numbers and can avoid the use of higher-level service discovery systems like DNS-SD, Consul, or Etcd.</p>
<p>When any container calls ioctl(SIOCGIFADDR) (get the address of an interface), it sees the same IP that any peer container would see them coming from — each pod has its own IP address that other pods can know.</p>
<p>By making IP addresses and ports the same both inside and outside the pods, we create a NAT-less, flat address space.<br />
Running "<code>ip addr show</code>" should work as expected.<br />
This would enable all existing naming/discovery mechanisms to work out of the box, including self-registration mechanisms and applications that distribute IP addresses. We should be optimizing for inter-pod network communication. Within a pod, containers are more likely to use communication through volumes (e.g., tmpfs) or IPC.</p>
</blockquote>
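<p>In practice, since both pods live in the same namespace, the simplest path is usually the Service DNS name rather than the raw ClusterIP. A sketch, run from inside the Romeo pod (replace <namespace> with yours):</p>
<pre><code># the short service name resolves within the same namespace
curl http://juliette-waiting-for-romeo-tocall/romeo-please-call-me
# fully qualified form
curl http://juliette-waiting-for-romeo-tocall.<namespace>.svc.cluster.local/romeo-please-call-me
</code></pre>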
<hr />
<p>You can follow the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/" rel="nofollow noreferrer">Debug Services</a> guide to troubleshoot this communication issue:</p>
<ul>
<li><p>Does the Service exist?</p>
<pre><code>kubectl get svc hostnames
</code></pre>
</li>
<li><p>Does the Service work by DNS name?</p>
<pre><code>nslookup hostnames
</code></pre>
</li>
<li><p>Does any Service work by DNS name?</p>
<pre><code>nslookup kubernetes.default
</code></pre>
</li>
<li><p>Does the Service work by IP?<br />
Assuming you have confirmed that DNS works, the next thing to test is whether your Service works by its IP address.<br />
From a Pod in your cluster, access the Service's IP (from kubectl get above).</p>
<pre><code>for i in $(seq 1 3); do
wget -qO- 10.0.1.175:80
done
</code></pre>
</li>
<li><p>Is the Service defined correctly?</p>
<pre><code>kubectl get service hostnames -o json
</code></pre>
</li>
<li><p>Does the Service have any Endpoints?</p>
<pre><code>kubectl get pods -l app=hostnames
</code></pre>
</li>
<li><p>Are the Pods working?<br />
From within a Pod:</p>
<pre><code>for ep in 10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376; do
wget -qO- $ep
done
</code></pre>
</li>
<li><p>Is the kube-proxy working?</p>
<pre><code>ps auxw | grep kube-proxy
iptables-save | grep hostnames
</code></pre>
</li>
<li><p>Is kube-proxy proxying?</p>
<pre><code>curl 10.0.1.175:80
</code></pre>
</li>
<li><p>Edge case: A Pod fails to reach itself via the Service IP</p>
</li>
</ul>
<blockquote>
<p>This might sound unlikely, but it does happen and it is supposed to work.</p>
<p>This can happen when the network is not properly configured for "hairpin" traffic, usually when kube-proxy is running in iptables mode and Pods are connected with bridge network.<br />
The Kubelet exposes a hairpin-mode flag that allows endpoints of a Service to loadbalance back to themselves if they try to access their own Service VIP.<br />
The hairpin-mode flag must either be set to <code>hairpin-veth</code> or <code>promiscuous-bridge</code>.</p>
</blockquote>
| VonC |
<p>I'm trying to figure out, what is import/export best practices in K8S keycloak(version 3.3.0.CR1). Here is keycloak official page <a href="http://www.keycloak.org/docs/2.0/server_admin_guide/topics/export-import.html" rel="nofollow noreferrer">import/export</a> explanation, and they example of export to single file json. Going to /keycloak/bin folder and the run this:</p>
<pre><code>./standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=singleFile -Dkeycloak.migration.file=keycloak-export.json
</code></pre>
<p>I logged in to pod, and I get errors after run this command:</p>
<pre><code>12:23:32,045 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
("core-service" => "management"),
("management-interface" => "http-interface")
]) - failure description: {
"WFLYCTL0080: Failed services" => {"org.wildfly.management.http.extensible" => "java.net.BindException: Address already in use /127.0.0.1:9990"},
"WFLYCTL0288: One or more services were unable to start due to one or more indirect dependencies not being available." => {
"Services that were unable to start:" => ["org.wildfly.management.http.extensible.shutdown"],
"Services that may be the cause:" => ["jboss.remoting.remotingConnectorInfoService.http-remoting-connector"]
}
}
</code></pre>
<p>As I see, Keycloak server run on the same port, where I ran backup script. Here helm/keycloak values.yml:</p>
<pre><code>Service:
Name: keycloak
Port: 8080
Type: ClusterIP
Deployment:
Image: jboss/keycloak
ImageTag: 2.5.1.Final
ImagePullPolicy: IfNotPresent
ContainerPort: 8080
KeycloakUser: Admin
KeycloakPassword: Admin
</code></pre>
<p>So, should the server be stopped before running these scripts? I can't stop the Keycloak process inside the pod, because the pod would be closed and a new one created.
Any suggestions for another way to export/import (backup/restore) data? Or am I missing something?</p>
<p>P.S.
I even tried UI import/export. Export works well, and I see all the data. But import only worked halfway: it brought over all "Clients", but not my "Realm" and "User Federation".</p>
| muzafarow | <p>Basically, you just have to start the exporting Keycloak instance on ports that are different from your main instance. I used something like this just now:</p>
<pre><code>bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=singleFile -Dkeycloak.migration.file=keycloak-export.json -Djboss.http.port=8888 -Djboss.https.port=9999 -Djboss.management.http.port=7777
</code></pre>
<p>The important part is all the ports. If you get more error messages, you might need to add more properties (<code>grep port standalone/configuration/standalone.xml</code> is your friend for finding out property names), but in the end, all error messages stop and you see this message instead:</p>
<pre><code>09:15:26,550 INFO  [org.keycloak.exportimport.singlefile.SingleFileExportProvider] (ServerService Thread Pool -- 52) Exporting model into file /opt/jboss/keycloak/keycloak-export.json
[...]
09:15:29,565 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: Keycloak 3.2.0.Final (WildFly Core 2.0.10.Final) started in 12156ms - Started 444 of 818 services (558 services are lazy, passive or on-demand)
</code></pre>
<p>Now you can stop the server with <kbd>Ctrl</kbd>-<kbd>C</kbd>, exit the container and copy the export file away with <code>kubectl cp</code>.</p>
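<p>The corresponding import on a fresh pod follows the same pattern, a sketch reusing the shifted ports:</p>
<pre><code>bin/standalone.sh -Dkeycloak.migration.action=import -Dkeycloak.migration.provider=singleFile -Dkeycloak.migration.file=keycloak-export.json -Djboss.http.port=8888 -Djboss.https.port=9999 -Djboss.management.http.port=7777
</code></pre>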
| Nikolai Prokoschenko |
<h3>TL;DR</h3>
<p>java.net.SocketException: Network is unreachable (connect failed) in simple Kubernetes pod that can curl to the internet.</p>
<h3>Short Description</h3>
<p>I've set up a simple Job object in Kubernetes, which spawns a simple pod that uses the Slack API to poll for a conversation history via Slack.</p>
<p>Running this application locally or dockerized works like a charm. But when trying to execute it in Kubernetes I get a</p>
<pre><code>java.net.SocketException: Network is unreachable (connect failed)
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.base/java.net.Socket.connect(Socket.java:609)
at okhttp3.internal.platform.Platform.connectSocket(Platform.kt:120)
</code></pre>
<h3>Kubernetes config</h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: my-app-job
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: name
image: myimage
imagePullPolicy: Always
ports:
- containerPort: 443
protocol: TCP
- containerPort: 80
protocol: TCP
env:
- name: http_proxy
value: myproxy
- name: https_proxy
value: myproxy
- name: no_proxy
value: myproxy
restartPolicy: OnFailure
</code></pre>
<p>Trying to debug what is happening, I noticed that I could curl something from my pod (so there's access to the public internet), but when I try to ping, I get <code>socket: Operation not permitted</code>.</p>
<p>Eg:</p>
<pre><code>bash-4.2$ ping www.google.com
ping: socket: Operation not permitted
bash-4.2$ curl -I www.google.com
HTTP/1.1 200 OK
content-type: text/html; charset=ISO-8859-1
p3p: CP="This is not a P3P policy! See g.co/p3phelp for more info."
date: Wed, 29 Sep 2021 09:18:36 GMT
server: gws
x-xss-protection: 0
x-frame-options: SAMEORIGIN
expires: Wed, 29 Sep 2021 09:18:36 GMT
cache-control: private
set-cookie: XXXXXXXXXXX
expires=Thu, 31-Mar-2022 09:18:36 GMT; path=/; domain=.google.com; HttpOnly
x-cache: MISS from XXXXXXXXXXX
x-cache-lookup: MISS fXXXXXXXXX
bash-4.2$ command terminated with exit code 137
</code></pre>
<p>I believe that I'm missing some configuration. I tried opening a port with a <code>NodePort</code> service, but I had no success. Any ideas on how to debug this?</p>
| Dimitrios | <p>Java does not inherit the <code>http_proxy</code>/<code>https_proxy</code> settings from the environment.</p>
<p>You need to specify the proxy using Java system properties</p>
<pre><code>-Dhttp.proxyHost=locahost -Dhttp.proxyPort=9900
</code></pre>
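<p>A fuller sketch (the proxy host/port are placeholders matching the pod's env vars; <code>http.nonProxyHosts</code> also applies to HTTPS connections):</p>
<pre><code>java -Dhttp.proxyHost=myproxy -Dhttp.proxyPort=9900 \
     -Dhttps.proxyHost=myproxy -Dhttps.proxyPort=9900 \
     -Dhttp.nonProxyHosts="localhost|*.cluster.local" \
     -jar app.jar
</code></pre>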
| shyam |