Revisions of kubernetes-salt

Containers Team (containersteam) committed (revision 257)
new commit from concourse: Commit 22a3b23 by Florian Bergmann fbergmann@suse.de
 Install system-wide certificates from pillars.
 
 `cert`-state will install the certificates as trust anchors.
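As a rough sketch of the idea, such a state could combine `file.managed` with a trust-store refresh. All IDs, pillar keys, and paths here are assumptions for illustration, not the actual ones from kubernetes-salt:

```yaml
# Illustrative sketch only: pillar key, file paths and state IDs are assumptions.
{% for name, cert in salt['pillar.get']('certificates', {}).items() %}
trust-anchor-{{ name }}:
  file.managed:
    - name: /etc/pki/trust/anchors/{{ name }}.crt
    - contents: |
        {{ cert | indent(8) }}
    - makedirs: True
{% endfor %}

refresh-trust-anchors:
  cmd.run:
    - name: update-ca-certificates
    - onchanges:
      - file: trust-anchor-*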
Containers Team (containersteam) committed (revision 256)
new commit from concourse: Commit 6c4ec0c by Maximilian Meister mmeister@suse.de
 skip removed etcd servers (bsc#1093305)
 
 Signed-off-by: Maximilian Meister <mmeister@suse.de>
Containers Team (containersteam) committed (revision 255)
new commit from concourse: Commit 03d371f by Rafael Fernández López ereslibre@ereslibre.es
 Remove default grace period and timeout when draining a node.
 
 By default, the grace period is -1, meaning whatever the pod specifies in its
 `terminationGracePeriodSeconds` spec. The pod knows better than we do what it
 needs to stop cleanly, so we don't need to apply arbitrary timeouts. If this
 is not specified, the default `terminationGracePeriodSeconds` value is 30
 seconds. After this grace termination period, a SIGKILL is sent to the
 process when evicting pods.
 
 Aside from this, we should have an "infinite" timeout. As long as this
 process doesn't stall, it is safer to keep performing the operation until it
 succeeds. If we get proof that this is causing problems we should add a
 timeout, but in general the draining process should not hang.
 
 The alternative is, in reality, the real problem: if we time out the draining
 process, it can happen that certain pods with remote volumes (NFS, RBD, ...)
 are never evicted, and when we go to restart the machine it hangs, because
 systemd fails to kill the processes while there are active mounts.
 
 Since there are no sensible defaults for the grace period or the global
 timeout, it is better to leave the former to the pod definition and the
 latter as "infinite" until we really hit an issue because of this.
 
 Fixes: bsc#1085980
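Concretely, this corresponds to invoking `kubectl drain` without overriding its defaults; a sketch of the resulting call (the state ID and node variable are illustrative):

```yaml
# Illustrative: --grace-period=-1 defers to terminationGracePeriodSeconds,
# and --timeout=0s means "wait forever" rather than an arbitrary deadline.
drain-node:
  cmd.run:
    - name: >
        kubectl drain {{ node }}
        --ignore-daemonsets
        --grace-period=-1
        --timeout=0s
```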
Jordi Massaguer (jordimassaguerpla) committed (revision 254)
fix changelog
Containers Team (containersteam) committed (revision 253)
new commit from concourse: Commit 876f7c7 by Rafael Fernández López ereslibre@ereslibre.es
 Lower the per-request timeout when we are checking for a successful query
 
 When we are waiting for some service to be up, if a request hangs for some
 reason, we want to retry at least several times. Without setting this value
 explicitly, it takes the default (`http_request_timeout`, 3600 seconds),
 which is way over our `wait_for` argument set at 300 seconds.
 
 By setting `http_request_timeout` to a more reasonable value when doing
 this kind of check, we can ensure that the request itself will time out
 several times before we call it done.
 
 Fixes: bsc#1093540
 Fixes: bsc#1093685
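Salt's `http.wait_for_successful_query` state takes roughly this shape; whether `http_request_timeout` is passed as a state argument or set in the minion opts is an assumption here, and the URL and numbers are illustrative:

```yaml
# Illustrative: a per-request timeout well below the overall wait_for deadline
# lets a hung request fail fast, leaving room for several retries.
wait-for-apiserver:
  http.wait_for_successful_query:
    - name: https://127.0.0.1:6443/healthz
    - wait_for: 300
    - request_interval: 5
    - http_request_timeout: 30
    - status: 200
```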
Containers Team (containersteam) committed (revision 252)
new commit from concourse: Commit b13d89a by Rafael Fernández López ereslibre@ereslibre.es
 Only remove the master grains if there are any masters to be updated.
 
 The `salt.function` call will be marked as failed if there were no minions to
 target. Make sure that we only run this step if we know that we'll have some
 targets available.
 
 Fixes: bsc#1093491
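The guard amounts to wrapping the `salt.function` step in a Jinja conditional; the variable, grain, and state ID names below are illustrative:

```yaml
# Illustrative: salt.function fails when no minions match, so only
# emit this step when there is at least one master to update.
{% if masters_to_update %}
remove-master-grain:
  salt.function:
    - name: grains.delval
    - tgt: {{ masters_to_update | join(',') }}
    - tgt_type: list
    - arg:
      - update_in_progress
{% endif %}
```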
Containers Team (containersteam) committed (revision 251)
new commit from concourse: Commit c93d25d by Alvaro Saurin alvaro.saurin@gmail.com
 Queue the /etc/hosts update when triggered from a reactor.
 
 Fixes part of bsc#1093123
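In reactor terms this means passing `queue: True` to the state run, so it waits for any in-progress run instead of failing; the sls name and target below are illustrative:

```yaml
# Illustrative reactor sls: queue the run rather than erroring out
# when another state run is already active on the minion.
update-etc-hosts:
  local.state.sls:
    - tgt: '*'
    - arg:
      - etc-hosts
    - kwarg:
        queue: True
```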
Containers Team (containersteam) committed (revision 250)
new commit from concourse: Commit bc4b7ae by Alvaro Saurin alvaro.saurin@gmail.com
 Updated diagrams
 
 feature#docs
Containers Team (containersteam) committed (revision 249)
new commit from concourse: Commit 442a76c by Rafael Fernández López ereslibre@ereslibre.es
 Make HAProxy work as an http proxy instead of a tcp proxy.
 
 This allows us to add fine-grained timeouts depending on the endpoint being
 accessed and on its parameters (e.g. `/log?follow=true` should have no
 timeout, as is the case on the apiserver). `/exec` is another example, but in
 this case the protocol is upgraded to SPDY.
 
 Fixes: bsc#1071994
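A minimal illustration of the distinction (this is not the actual kubernetes-salt template; names, addresses, and timeout values are assumptions):

```
# Illustrative haproxy fragment: in http mode, timeouts can be tuned for
# long-lived streams, which tcp mode cannot distinguish.
defaults
    mode http
    timeout connect 5s
    timeout client  60s
    timeout server  60s
    # upgraded connections (e.g. /exec over SPDY) use the tunnel timeout
    timeout tunnel  1h

backend apiservers
    # streaming requests such as /log?follow=true need a long server timeout
    timeout server 24h
    server master-0 10.0.0.10:6443 ssl verify none
```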
Containers Team (containersteam) committed (revision 248)
new commit from concourse: Commit 4b37cb9 by Maximilian Meister mmeister@suse.de
 fix eviction-hard path
 
 feature#compute-resources
 
 Signed-off-by: Maximilian Meister <mmeister@suse.de>
Containers Team (containersteam) committed (revision 247)
new commit from concourse: Commit 177f774 by Kiall Mac Innes kiall@macinnes.ie
 Add JUnit output
 
 Commit 28e522e by Kiall Mac Innes kiall@macinnes.ie
 Update README with style check steps
 
 Commit 248c228 by Kiall Mac Innes kiall@macinnes.ie
 Fixup python code style issues
 
 Commit 4712a69 by Kiall Mac Innes kiall@macinnes.ie
 Add flake8 job
Containers Team (containersteam) committed (revision 246)
new commit from concourse: Commit 6de5432 by Kiall Mac Innes kiall@macinnes.ie
 Add Housekeeping Job
buildservice-autocommit accepted request 606451 from Containers Team (containersteam) (revision 245)
baserev update by copy to link target
Containers Team (containersteam) committed (revision 244)
new commit from concourse: Commit 1657de5 by Flavio Castelli fcastelli@suse.com
 Add missing cri-o removal states
 
 This is required to fix node removal on clusters using CRI-O as CRI.
 
 Fixes bsc#1092614
 
 Signed-off-by: Flavio Castelli <fcastelli@suse.com>
buildservice-autocommit accepted request 606230 from Containers Team (containersteam) (revision 243)
baserev update by copy to link target
buildservice-autocommit accepted request 605706 from Containers Team (containersteam) (revision 242)
baserev update by copy to link target
Containers Team (containersteam) committed (revision 241)
new commit from concourse: Commit e286f9b by Flavio Castelli fcastelli@suse.com
 Make crictl handling more robust
 
 Some of our states now depend on the `crictl` tool. All of these states have
 to require the `kubelet service.running` state, otherwise the
 `crictl` socket won't be available and the state will fail.
 
 Also, with these changes, the blame for a failure should point directly at
 the guilty party (the `kubelet` service not running for whatever reason)
 instead of falling on the `haproxy` state.
 
 Finally, the check looking for the `crictl` socket has been changed to ensure
 the socket file exists and the service is actually listening.
 
 This will help with bugs like bsc#1091419
 
 Signed-off-by: Flavio Castelli <fcastelli@suse.com>
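The dependency pattern described above looks roughly like this (the state ID, socket path, and command are illustrative, not taken from the actual states):

```yaml
# Illustrative: requiring the kubelet service ensures the crictl socket
# exists before this state runs, and pins the blame on kubelet otherwise.
list-pods:
  cmd.run:
    - name: crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods
    - require:
      - service: kubelet
```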
Containers Team (containersteam) committed (revision 240)
new commit from concourse: Commit bcf5415 by Flavio Castelli fcastelli@suse.com
 kubelet: allow resource reservation
 
 Allow kubelet to take into account resource reservation and eviction
 threshold.
 
 == Resource reservation ==
 
 It's possible to reserve resources for the `kube` and the `system`
 components.
 
 The `kube` component is the one including the kubernetes components: api
 server, controller manager, scheduler, proxy, kubelet and the container
 engine components (docker, containerd, cri-o, runc).
 
 The `system` component is the `system.slice`, basically all the system
 services: sshd, cron, logrotate,...
 
 By default we don't specify any kind of resource reservation. Note well: when
 resource reservations are in place, kubelet will reduce the amount of
 resources allocatable by the node. However, **no** enforcement will be done
 on either the `kube.slice` or the `system.slice`.
 
 This is not happening because:
 
 - Resource enforcement is done using cgroups.
 - The slices are created by systemd.
 - systemd doesn't manage all the available cgroups yet.
 - kubelet tries to manage cgroups that are not handled by systemd,
   resulting in the kubelet failing at startup.
 - Changing the cgroup driver to `systemd` doesn't fix the issue.
 
 Moreover, enforcing limits on the `system` and the `kube` slices can lead to
 resource starvation of core components of the system. As advised even by the
 official kubernetes docs, this is something that only expert users should do,
 and only after extensive profiling of their nodes.
 
 Finally, even if we wanted to enforce the limits, the right place would be
 systemd (by tuning the slice settings).
 
 For more information see the official documentation:
 https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/
 
 == Eviction threshold ==
 
 By default no eviction threshold is set.
 
 bsc#1086185
 
 Signed-off-by: Flavio Castelli <fcastelli@suse.com>
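These reservations map onto the kubelet flags `--kube-reserved`, `--system-reserved`, and `--eviction-hard` (per the kubernetes documentation linked above). The pillar layout and values below are purely illustrative; as the message says, all three stay unset by default:

```yaml
# Illustrative pillar: uncommenting these would reserve resources and set
# an eviction threshold; nothing is reserved by default.
kubelet:
  # kube_reserved: "cpu=200m,memory=500Mi"
  # system_reserved: "cpu=200m,memory=500Mi"
  # eviction_hard: "memory.available<100Mi"
```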
Containers Team (containersteam) committed (revision 239)
new commit from concourse: Commit 964deee by Maximilian Meister mmeister@suse.de
 add condition to KUBE_ADMISSION_CONTROL
 
 bsc#1092140
 
 Signed-off-by: Maximilian Meister <mmeister@suse.de>
 
 Commit eaab500 by Maximilian Meister mmeister@suse.de
 fix conflicting sls id's
 
 they need to be globally unique
 
 an orch error happened when setting psp to false in params.sls
 
 partially fixes https://bugzilla.suse.com/show_bug.cgi?id=1092140
 
 bsc#1092140
 
 Signed-off-by: Maximilian Meister <mmeister@suse.de>
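For illustration, the kind of collision meant here (file and ID names invented): if two sls files rendered in the same run declare the same state ID, Salt aborts with a "conflicting ID" rendering error.

```yaml
# a.sls -- illustrative
configure-psp:
  file.managed:
    - name: /etc/kubernetes/psp.yaml

# b.sls -- illustrative: reusing the ID "configure-psp" here would fail
# to render, since state IDs must be globally unique across all sls files.
configure-psp-addons:
  file.managed:
    - name: /etc/kubernetes/psp-addons.yaml
```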
buildservice-autocommit accepted request 605055 from Containers Team (containersteam) (revision 238)
baserev update by copy to link target