File kubeshark-cli-52.8.1.obscpio of Package kubeshark-cli

07070100000000000081A4000000000000000000000001689B9CB3000000DC000000000000000000000000000000000000002500000000kubeshark-cli-52.8.1/.goreleaser.yml
brews:
  - name: kubeshark
    homepage: https://github.com/kubeshark/kubeshark
    tap:
      owner: kubeshark
      name: homebrew-kubeshark
    commit_author:
      name: mertyildiran
      email: me@mertyildiran.com
07070100000001000081A4000000000000000000000001689B9CB300000D00000000000000000000000000000000000000002800000000kubeshark-cli-52.8.1/CODE_OF_CONDUCT.md
# Contributor Covenant Code of Conduct

## Our Pledge

In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.

## Our Standards

Examples of behavior that contributes to creating a positive environment
include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or
 advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
 address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
 professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.

## Scope

This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq
07070100000002000081A4000000000000000000000001689B9CB3000005C0000000000000000000000000000000000000002500000000kubeshark-cli-52.8.1/CONTRIBUTING.md
![Kubeshark: The API Traffic Analyzer for Kubernetes](https://raw.githubusercontent.com/kubeshark/assets/master/svg/kubeshark-logo.svg)

# Contributing to Kubeshark

We welcome code contributions from the community.
Please read and follow the guidelines below.

## Communication

* Before starting work on a major feature, please reach out to us via [GitHub](https://github.com/kubeshark/kubeshark), [Discord](https://discord.gg/WkvRGMUcx7), [Slack](https://join.slack.com/t/kubeshark/shared_invite/zt-1k3sybpq9-uAhFkuPJiJftKniqrGHGhg), [email](mailto:info@kubeshark.co), etc. We will make sure no one else is already working on it. A _major feature_ is defined as any change that alters more than 100 LOC (not including tests) or that changes any user-facing behavior.
* Small patches and bug fixes don't need prior communication.

## Contribution Requirements

* Code style: most of the code is written in Go; please follow [these guidelines](https://golang.org/doc/effective_go).
* Go-tools compatible (`go get`, `go test`, etc.); see the example below.
* Code coverage for unit tests must not decrease.
* Code must be usefully commented, not only for developers on the project but also for external users of these packages.
* When reviewing PRs, you are encouraged to use Golang's [code review comments page](https://github.com/golang/go/wiki/CodeReviewComments).
* The project follows the [Google JSON Style Guide](https://google.github.io/styleguide/jsoncstyleguide.xml) for the REST APIs it provides.
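
These checks can be run locally before opening a pull request. A minimal sketch using the `test` and `lint` targets that ship in this repository's Makefile (`golangci-lint` needs to be installed for the latter):

```shell
# Run the unit tests with coverage and the race detector
make test

# Lint the source tree with golangci-lint
make lint
```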
07070100000003000081A4000000000000000000000001689B9CB300002A05000000000000000000000000000000000000001D00000000kubeshark-cli-52.8.1/LICENSE
                                 Apache License
                           Version 2.0, January 2004
                        https://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   Copyright 2022 Kubeshark

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       https://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
07070100000004000081A4000000000000000000000001689B9CB3000020B4000000000000000000000000000000000000001E00000000kubeshark-cli-52.8.1/Makefile
SHELL=/bin/bash

.PHONY: help
.DEFAULT_GOAL := build
.ONESHELL:

SUFFIX=$(GOOS)_$(GOARCH)
COMMIT_HASH=$(shell git rev-parse HEAD)
GIT_BRANCH=$(shell git branch --show-current | tr '[:upper:]' '[:lower:]')
GIT_VERSION=$(shell git branch --show-current | tr '[:upper:]' '[:lower:]')
BUILD_TIMESTAMP=$(shell date +%s)
export VER?=0.0.0

help: ## Print this help message.
	@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)

build-debug:  ## Build for debugging.
	export CGO_ENABLED=1
	export GCLFAGS='-gcflags="all=-N -l"'
	${MAKE} build-base

build: ## Build.
	export CGO_ENABLED=0
	export LDFLAGS_EXT='-extldflags=-static -s -w'
	${MAKE} build-base

build-race: ## Build with -race flag.
	export CGO_ENABLED=1
	export GCLFAGS='-race'
	export LDFLAGS_EXT='-extldflags=-static -s -w'
	${MAKE} build-base

build-base: ## Build binary (select the platform via GOOS / GOARCH env variables).
	go build ${GCLFAGS} -ldflags="${LDFLAGS_EXT} \
					-X 'github.com/kubeshark/kubeshark/misc.GitCommitHash=$(COMMIT_HASH)' \
					-X 'github.com/kubeshark/kubeshark/misc.Branch=$(GIT_BRANCH)' \
					-X 'github.com/kubeshark/kubeshark/misc.BuildTimestamp=$(BUILD_TIMESTAMP)' \
					-X 'github.com/kubeshark/kubeshark/misc.Platform=$(SUFFIX)' \
					-X 'github.com/kubeshark/kubeshark/misc.Ver=$(VER)'" \
					-o bin/kubeshark_$(SUFFIX) kubeshark.go && \
	cd bin && shasum -a 256 kubeshark_${SUFFIX} > kubeshark_${SUFFIX}.sha256

build-brew: ## Build binary for brew/core CI
	go build ${GCLFAGS} -ldflags="${LDFLAGS_EXT} \
					-X 'github.com/kubeshark/kubeshark/misc.GitCommitHash=$(COMMIT_HASH)' \
					-X 'github.com/kubeshark/kubeshark/misc.Branch=$(GIT_BRANCH)' \
					-X 'github.com/kubeshark/kubeshark/misc.BuildTimestamp=$(BUILD_TIMESTAMP)' \
					-X 'github.com/kubeshark/kubeshark/misc.Platform=$(SUFFIX)' \
					-X 'github.com/kubeshark/kubeshark/misc.Ver=$(VER)'" \
					-o kubeshark kubeshark.go

build-windows-amd64:
	$(MAKE) build GOOS=windows GOARCH=amd64 && \
	mv ./bin/kubeshark_windows_amd64 ./bin/kubeshark.exe && \
	rm bin/kubeshark_windows_amd64.sha256 && \
	cd bin && shasum -a 256 kubeshark.exe > kubeshark.exe.sha256

build-all: ## Build for all supported platforms.
	export CGO_ENABLED=0
	echo "Compiling for every OS and Platform" && \
	mkdir -p bin && sed s/_VER_/$(VER)/g RELEASE.md.TEMPLATE >  bin/README.md && \
	$(MAKE) build GOOS=linux GOARCH=amd64 && \
	$(MAKE) build GOOS=linux GOARCH=arm64 && \
	$(MAKE) build GOOS=darwin GOARCH=amd64 && \
	$(MAKE) build GOOS=darwin GOARCH=arm64 && \
	$(MAKE) build-windows-amd64 && \
	echo "---------" && \
	find ./bin -ls

clean: ## Clean all build artifacts.
	go clean
	rm -rf ./bin/*

test: ## Run cli tests.
	@go test ./... -coverpkg=./... -race -coverprofile=coverage.out -covermode=atomic

lint: ## Lint the source code.
	golangci-lint run

kubectl-view-all-resources: ## This command outputs all Kubernetes resources using YAML format and pipes it to VS Code
	./kubectl.sh view-all-resources

kubectl-view-kubeshark-resources: ## This command outputs all Kubernetes resources in "kubeshark" namespace using YAML format and pipes it to VS Code
	./kubectl.sh view-kubeshark-resources

generate-helm-values: ## Generate the Helm values from config.yaml
	mv ~/.kubeshark/config.yaml ~/.kubeshark/config.yaml.old; bin/kubeshark__ config>helm-chart/values.yaml;mv ~/.kubeshark/config.yaml.old ~/.kubeshark/config.yaml
	sed -i 's/^license:.*/license: ""/' helm-chart/values.yaml && sed -i '1i # find a detailed description here: https://github.com/kubeshark/kubeshark/blob/master/helm-chart/README.md' helm-chart/values.yaml 

generate-manifests: ## Generate the manifests from the Helm chart using default configuration
	helm template kubeshark -n default ./helm-chart > ./manifests/complete.yaml

logs-sniffer:
	export LOGS_POD_PREFIX=kubeshark-worker-
	export LOGS_CONTAINER='-c sniffer'
	export LOGS_FOLLOW=
	${MAKE} logs

logs-sniffer-follow:
	export LOGS_POD_PREFIX=kubeshark-worker-
	export LOGS_CONTAINER='-c sniffer'
	export LOGS_FOLLOW=--follow
	${MAKE} logs

logs-tracer:
	export LOGS_POD_PREFIX=kubeshark-worker-
	export LOGS_CONTAINER='-c tracer'
	export LOGS_FOLLOW=
	${MAKE} logs

logs-tracer-follow:
	export LOGS_POD_PREFIX=kubeshark-worker-
	export LOGS_CONTAINER='-c tracer'
	export LOGS_FOLLOW=--follow
	${MAKE} logs

logs-worker: logs-sniffer

logs-worker-follow: logs-sniffer-follow

logs-hub:
	export LOGS_POD_PREFIX=kubeshark-hub
	export LOGS_FOLLOW=
	${MAKE} logs

logs-hub-follow:
	export LOGS_POD_PREFIX=kubeshark-hub
	export LOGS_FOLLOW=--follow
	${MAKE} logs

logs-front:
	export LOGS_POD_PREFIX=kubeshark-front
	export LOGS_FOLLOW=
	${MAKE} logs

logs-front-follow:
	export LOGS_POD_PREFIX=kubeshark-front
	export LOGS_FOLLOW=--follow
	${MAKE} logs

logs: ## Print the logs of the last listed pod whose name starts with LOGS_POD_PREFIX (container and follow behavior via LOGS_CONTAINER and LOGS_FOLLOW).
	kubectl logs $$(kubectl get pods | awk '$$1 ~ /^$(LOGS_POD_PREFIX)/' | awk 'END {print $$1}') $(LOGS_CONTAINER) $(LOGS_FOLLOW)

ssh-node:
	kubectl ssh node $$(kubectl get nodes | awk 'END {print $$1}')

exec-worker:
	export EXEC_POD_PREFIX=kubeshark-worker-
	${MAKE} exec

exec-hub:
	export EXEC_POD_PREFIX=kubeshark-hub
	${MAKE} exec

exec-front:
	export EXEC_POD_PREFIX=kubeshark-front
	${MAKE} exec

exec: ## Open a shell inside the last listed pod whose name starts with EXEC_POD_PREFIX.
	kubectl exec --stdin --tty $$(kubectl get pods | awk '$$1 ~ /^$(EXEC_POD_PREFIX)/' | awk 'END {print $$1}') -- /bin/sh

helm-install:
	cd helm-chart && helm install kubeshark . --set tap.docker.tag=$(TAG) && cd ..

helm-install-debug:
	cd helm-chart && helm install kubeshark . --set tap.docker.tag=$(TAG) --set tap.debug=true && cd ..

helm-install-profile:
	cd helm-chart && helm install kubeshark . --set tap.docker.tag=$(TAG) --set tap.pprof.enabled=true && cd ..

helm-uninstall:
	helm uninstall kubeshark

proxy:
	kubeshark proxy

port-forward:
	kubectl port-forward $$(kubectl get pods | awk '$$1 ~ /^$(POD_PREFIX)/' | awk 'END {print $$1}') $(SRC_PORT):$(DST_PORT)

release:
	@cd ../worker && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
	@cd ../tracer && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
	@cd ../hub && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
	@cd ../front && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
	@cd ../kubeshark && git checkout master && git pull && sed -i "s/^version:.*/version: \"$(shell echo $(VERSION) | sed -E 's/^([0-9]+\.[0-9]+\.[0-9]+)\..*/\1/')\"/" helm-chart/Chart.yaml && make
	@if [ "$(shell uname)" = "Darwin" ]; then \
		codesign --sign - --force --preserve-metadata=entitlements,requirements,flags,runtime ./bin/kubeshark__; \
	fi
	@make generate-helm-values && make generate-manifests
	@git add -A . && git commit -m ":bookmark: Bump the Helm chart version to $(VERSION)" && git push
	@git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
	@cd helm-chart && rm -rf ../../kubeshark.github.io/charts/chart && mkdir ../../kubeshark.github.io/charts/chart && cp -r . ../../kubeshark.github.io/charts/chart/
	@cd ../../kubeshark.github.io/ && git add -A . && git commit -m ":sparkles: Update the Helm chart" && git push
	@cd ../kubeshark

release-dry-run:
	@cd ../worker && git checkout master && git pull 
	@cd ../tracer && git checkout master && git pull 
	@cd ../hub && git checkout master && git pull
	@cd ../front && git checkout master && git pull 
	@cd ../kubeshark && git checkout master && git pull && sed -i "s/^version:.*/version: \"$(shell echo $(VERSION) | sed -E 's/^([0-9]+\.[0-9]+\.[0-9]+)\..*/\1/')\"/" helm-chart/Chart.yaml && make
	@if [ "$(shell uname)" = "Darwin" ]; then \
		codesign --sign - --force --preserve-metadata=entitlements,requirements,flags,runtime ./bin/kubeshark__; \
	fi
	@make generate-helm-values && make generate-manifests

branch:
	@cd ../worker && git checkout master && git pull && git checkout -b $(name); git push --set-upstream origin $(name)
	@cd ../hub && git checkout master && git pull && git checkout -b $(name); git push --set-upstream origin $(name)
	@cd ../front && git checkout master && git pull && git checkout -b $(name); git push --set-upstream origin $(name)

switch-to-branch:
	@cd ../worker && git checkout $(name)
	@cd ../hub && git checkout $(name)
	@cd ../front && git checkout $(name)
07070100000005000081A4000000000000000000000001689B9CB300000D98000000000000000000000000000000000000001F00000000kubeshark-cli-52.8.1/README.md
<p align="center">
  <img src="https://raw.githubusercontent.com/kubeshark/assets/master/svg/kubeshark-logo.svg" alt="Kubeshark: Traffic analyzer for Kubernetes." height="128px"/>
</p>

<p align="center">
    <a href="https://github.com/kubeshark/kubeshark/releases/latest">
        <img alt="GitHub Latest Release" src="https://img.shields.io/github/v/release/kubeshark/kubeshark?logo=GitHub&style=flat-square">
    </a>
    <a href="https://hub.docker.com/r/kubeshark/worker">
      <img alt="Docker pulls" src="https://img.shields.io/docker/pulls/kubeshark/worker?color=%23099cec&logo=Docker&style=flat-square">
    </a>
    <a href="https://hub.docker.com/r/kubeshark/worker">
      <img alt="Image size" src="https://img.shields.io/docker/image-size/kubeshark/kubeshark/latest?logo=Docker&style=flat-square">
    </a>
    <a href="https://discord.gg/WkvRGMUcx7">
      <img alt="Discord" src="https://img.shields.io/discord/1042559155224973352?logo=Discord&style=flat-square&label=discord">
    </a>
    <a href="https://join.slack.com/t/kubeshark/shared_invite/zt-1m90td3n7-VHxN_~V5kVp80SfQW3SfpA">
      <img alt="Slack" src="https://img.shields.io/badge/slack-join_chat-green?logo=Slack&style=flat-square&label=slack">
    </a>
</p>

<p align="center">
  <b>
    Want to see Kubeshark in action right now? Visit this
    <a href="https://demo.kubeshark.co/">live demo deployment</a> of Kubeshark.
  </b>
</p>

**Kubeshark** is a network observability platform for Kubernetes, providing real-time, cluster-wide visibility into the Kubernetes network. It enables users to inspect all internal and external cluster communications, API calls, and data in transit. Additionally, Kubeshark detects anomalies and emergent behaviors, triggers autonomous remediations, and generates deep network insights.

![Simple UI](https://github.com/kubeshark/assets/raw/master/png/kubeshark-ui.png)

Think [TCPDump](https://en.wikipedia.org/wiki/Tcpdump) and [Wireshark](https://www.wireshark.org/) reimagined for Kubernetes.

#### Service Map with Kubernetes Context

![Service Map with Kubernetes Context](https://github.com/kubeshark/assets/raw/master/png/kubeshark-servicemap.png)

#### Cluster-Wide PCAP Recording

![Cluster-Wide PCAP Recording](https://github.com/kubeshark/assets/raw/master/png/pcap-recording.png)

## Getting Started
Download the [latest release](https://github.com/kubeshark/kubeshark/releases/latest) of **Kubeshark**'s binary distribution, or use one of the following methods to deploy **Kubeshark**. Once it is running, the [web-based dashboard](https://docs.kubeshark.co/en/ui) should open in your browser, showing a real-time view of your cluster's traffic.

### Homebrew

[Homebrew](https://brew.sh/) :beer: users can install the Kubeshark CLI with:

```shell
brew install kubeshark
kubeshark tap
```

To clean up:
```shell
kubeshark clean
```

### Helm

Add the Helm repository and install the chart:

```shell
helm repo add kubeshark https://helm.kubeshark.co
helm install kubeshark kubeshark/kubeshark
```
Follow the on-screen instructions on how to connect to the dashboard.

To clean up:
```shell
helm uninstall kubeshark
```

## Building From Source

Clone this repository and run `make` to build it. After the build completes, the executable can be found under `./bin/`, named with a `GOOS_GOARCH` suffix (for example, `kubeshark_linux_amd64`).
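
For example, a minimal sketch (the exact binary name depends on the `GOOS`/`GOARCH` suffix the Makefile appends):

```shell
git clone https://github.com/kubeshark/kubeshark.git
cd kubeshark
make          # compile the CLI
ls ./bin/     # the kubeshark binary and its .sha256 checksum are written here
```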

## Documentation

To learn more, read the [documentation](https://docs.kubeshark.co).

## Contributing

We :heart: pull requests! See [CONTRIBUTING.md](CONTRIBUTING.md) for the contribution guide.
07070100000006000081A4000000000000000000000001689B9CB3000003D5000000000000000000000000000000000000002900000000kubeshark-cli-52.8.1/RELEASE.md.TEMPLATE
# Kubeshark release _VER_
Release notes coming soon...

## Download Kubeshark for your platform

**Mac** (x86-64/Intel)
```
curl -Lo kubeshark https://github.com/kubeshark/kubeshark/releases/download/_VER_/kubeshark_darwin_amd64 && chmod 755 kubeshark
```

**Mac** (AArch64/Apple M1 silicon)
```
curl -Lo kubeshark https://github.com/kubeshark/kubeshark/releases/download/_VER_/kubeshark_darwin_arm64 && chmod 755 kubeshark
```

**Linux** (x86-64)
```
curl -Lo kubeshark https://github.com/kubeshark/kubeshark/releases/download/_VER_/kubeshark_linux_amd64 && chmod 755 kubeshark
```

**Linux** (AArch64)
```
curl -Lo kubeshark https://github.com/kubeshark/kubeshark/releases/download/_VER_/kubeshark_linux_arm64 && chmod 755 kubeshark
```

**Windows** (x86-64)
```
curl -LO https://github.com/kubeshark/kubeshark/releases/download/_VER_/kubeshark.exe
```

### Checksums
SHA256 checksums are available for the compiled binaries.
Run `shasum -a 256 -c kubeshark_OS_ARCH.sha256` to verify.


07070100000007000041ED000000000000000000000002689B9CB300000000000000000000000000000000000000000000001900000000kubeshark-cli-52.8.1/cmd
07070100000008000081A4000000000000000000000001689B9CB300000453000000000000000000000000000000000000002200000000kubeshark-cli-52.8.1/cmd/clean.go
package cmd

import (
	"fmt"

	"github.com/creasty/defaults"
	"github.com/kubeshark/kubeshark/config"
	"github.com/kubeshark/kubeshark/config/configStructs"
	"github.com/kubeshark/kubeshark/kubernetes/helm"
	"github.com/kubeshark/kubeshark/misc"
	"github.com/rs/zerolog/log"
	"github.com/spf13/cobra"
)

var cleanCmd = &cobra.Command{
	Use:   "clean",
	Short: fmt.Sprintf("Removes all %s resources", misc.Software),
	RunE: func(cmd *cobra.Command, args []string) error {
		resp, err := helm.NewHelm(
			config.Config.Tap.Release.Repo,
			config.Config.Tap.Release.Name,
			config.Config.Tap.Release.Namespace,
		).Uninstall()
		if err != nil {
			log.Error().Err(err).Send()
		} else {
			log.Info().Msgf("Uninstalled the Helm release: %s", resp.Release.Name)
		}
		return nil
	},
}

func init() {
	rootCmd.AddCommand(cleanCmd)

	defaultTapConfig := configStructs.TapConfig{}
	if err := defaults.Set(&defaultTapConfig); err != nil {
		log.Debug().Err(err).Send()
	}

	cleanCmd.Flags().StringP(configStructs.ReleaseNamespaceLabel, "s", defaultTapConfig.Release.Namespace, "Release namespace of Kubeshark")
}
07070100000009000081A4000000000000000000000001689B9CB300001077000000000000000000000000000000000000002300000000kubeshark-cli-52.8.1/cmd/common.go
package cmd

import (
	"context"
	"errors"
	"fmt"
	"path"
	"regexp"
	"time"

	"github.com/kubeshark/kubeshark/config"
	"github.com/kubeshark/kubeshark/errormessage"
	"github.com/kubeshark/kubeshark/internal/connect"
	"github.com/kubeshark/kubeshark/kubernetes"
	"github.com/kubeshark/kubeshark/misc"
	"github.com/kubeshark/kubeshark/misc/fsUtils"
	"github.com/rs/zerolog/log"
)

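// startProxyReportErrorIfAny starts a kubectl-style proxy to the given service and, if the
// health check fails, shuts the proxy down and falls back to a port-forward against the
// matching pod. Errors are logged rather than returned.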
func startProxyReportErrorIfAny(kubernetesProvider *kubernetes.Provider, ctx context.Context, serviceName string, podName string, proxyPortLabel string, srcPort uint16, dstPort uint16, healthCheck string) {
	httpServer, err := kubernetes.StartProxy(kubernetesProvider, config.Config.Tap.Proxy.Host, srcPort, config.Config.Tap.Release.Namespace, serviceName)
	if err != nil {
		log.Error().
			Err(errormessage.FormatError(err)).
			Msg(fmt.Sprintf("Error occurred while running K8s proxy. Try setting different port using --%s", proxyPortLabel))
		return
	}

	connector := connect.NewConnector(kubernetes.GetProxyOnPort(srcPort), connect.DefaultRetries, connect.DefaultTimeout)
	if err := connector.TestConnection(healthCheck); err != nil {
		log.Warn().
			Str("service", serviceName).
			Msg("Couldn't connect using proxy, stopping proxy and trying to create port-forward...")
		if err := httpServer.Shutdown(ctx); err != nil {
			log.Error().
				Err(errormessage.FormatError(err)).
				Msg("Error occurred while stopping proxy.")
		}

		podRegex, _ := regexp.Compile(podName)
		if _, err := kubernetes.NewPortForward(kubernetesProvider, config.Config.Tap.Release.Namespace, podRegex, srcPort, dstPort, ctx); err != nil {
			log.Error().
				Str("pod-regex", podRegex.String()).
				Err(errormessage.FormatError(err)).
				Msg(fmt.Sprintf("Error occurred while running port forward. Try setting different port using --%s", proxyPortLabel))
			return
		}

		connector = connect.NewConnector(kubernetes.GetProxyOnPort(srcPort), connect.DefaultRetries, connect.DefaultTimeout)
		if err := connector.TestConnection(healthCheck); err != nil {
			log.Error().
				Str("service", serviceName).
				Err(errormessage.FormatError(err)).
				Msg("Couldn't connect to service.")
			return
		}
	}
}

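// getKubernetesProviderForCli builds a Kubernetes provider from the configured kubeconfig and
// context, verifies the target is not an HTTP proxy, and (unless dontCheckVersion is set)
// validates the cluster's Kubernetes version.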
func getKubernetesProviderForCli(silent bool, dontCheckVersion bool) (*kubernetes.Provider, error) {
	kubeConfigPath := config.Config.KubeConfigPath()
	kubernetesProvider, err := kubernetes.NewProvider(kubeConfigPath, config.Config.Kube.Context)
	if err != nil {
		handleKubernetesProviderError(err)
		return nil, err
	}

	if !silent {
		log.Info().Str("path", kubeConfigPath).Msg("Using kubeconfig:")
	}

	if err := kubernetesProvider.ValidateNotProxy(); err != nil {
		handleKubernetesProviderError(err)
		return nil, err
	}

	if !dontCheckVersion {
		kubernetesVersion, err := kubernetesProvider.GetKubernetesVersion()
		if err != nil {
			handleKubernetesProviderError(err)
			return nil, err
		}

		if err := kubernetes.ValidateKubernetesVersion(kubernetesVersion); err != nil {
			handleKubernetesProviderError(err)
			return nil, err
		}
	}

	return kubernetesProvider, nil
}

func handleKubernetesProviderError(err error) {
	var clusterBehindProxyErr *kubernetes.ClusterBehindProxyError
	if ok := errors.As(err, &clusterBehindProxyErr); ok {
		log.Error().Msg(fmt.Sprintf("Cannot establish http-proxy connection to the Kubernetes cluster. If you’re using Lens or similar tool, please run '%s' with regular kubectl config using --%v %v=$HOME/.kube/config flag", misc.Program, config.SetCommandName, config.KubeConfigPathConfigName))
	} else {
		log.Error().Err(err).Send()
	}
}

func finishSelfExecution(kubernetesProvider *kubernetes.Provider) {
	removalCtx, cancel := context.WithTimeout(context.Background(), cleanupTimeout)
	defer cancel()
	dumpLogsIfNeeded(removalCtx, kubernetesProvider)
}

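// dumpLogsIfNeeded writes a ZIP archive of the deployment's logs into the dot folder when the
// dumpLogs config option is enabled.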
func dumpLogsIfNeeded(ctx context.Context, kubernetesProvider *kubernetes.Provider) {
	if !config.Config.DumpLogs {
		return
	}
	dotDir := misc.GetDotFolderPath()
	filePath := path.Join(dotDir, fmt.Sprintf("%s_logs_%s.zip", misc.Program, time.Now().Format("2006_01_02__15_04_05")))
	if err := fsUtils.DumpLogs(ctx, kubernetesProvider, filePath, config.Config.Logs.Grep); err != nil {
		log.Error().Err(err).Msg("Failed to dump logs.")
	}
}
0707010000000A000081A4000000000000000000000001689B9CB30000065E000000000000000000000000000000000000002300000000kubeshark-cli-52.8.1/cmd/config.go
package cmd

import (
	"fmt"

	"github.com/creasty/defaults"
	"github.com/kubeshark/kubeshark/config"
	"github.com/kubeshark/kubeshark/config/configStructs"
	"github.com/kubeshark/kubeshark/misc"
	"github.com/kubeshark/kubeshark/utils"
	"github.com/rs/zerolog/log"
	"github.com/spf13/cobra"
)

var configCmd = &cobra.Command{
	Use:   "config",
	Short: fmt.Sprintf("Generate %s config with default values", misc.Software),
	RunE: func(cmd *cobra.Command, args []string) error {
		if config.Config.Config.Regenerate {
			defaultConfig := config.CreateDefaultConfig()
			if err := defaults.Set(&defaultConfig); err != nil {
				log.Error().Err(err).Send()
				return nil
			}
			if err := config.WriteConfig(&defaultConfig); err != nil {
				log.Error().Err(err).Msg("Failed generating config with defaults.")
				return nil
			}

			log.Info().Str("config-path", config.ConfigFilePath).Msg("Template file written to config path.")
		} else {
			template, err := utils.PrettyYaml(config.Config)
			if err != nil {
				log.Error().Err(err).Msg("Failed converting config with defaults to YAML.")
				return nil
			}

			log.Debug().Str("template", template).Msg("Printing template config...")
			fmt.Printf("%v", template)
		}

		return nil
	},
}

func init() {
	rootCmd.AddCommand(configCmd)

	defaultConfig := config.CreateDefaultConfig()
	if err := defaults.Set(&defaultConfig); err != nil {
		log.Debug().Err(err).Send()
	}

	configCmd.Flags().BoolP(configStructs.RegenerateConfigName, "r", defaultConfig.Config.Regenerate, fmt.Sprintf("Regenerate the config file with default values to path %s", config.GetConfigFilePath(nil)))
}
0707010000000B000081A4000000000000000000000001689B9CB3000010D4000000000000000000000000000000000000002400000000kubeshark-cli-52.8.1/cmd/console.go
package cmd

import (
	"fmt"
	"net/http"
	"net/url"
	"os"
	"os/signal"
	"strings"
	"time"

	"github.com/creasty/defaults"
	"github.com/gorilla/websocket"
	"github.com/kubeshark/kubeshark/config"
	"github.com/kubeshark/kubeshark/config/configStructs"
	"github.com/kubeshark/kubeshark/kubernetes"
	"github.com/kubeshark/kubeshark/utils"
	"github.com/rs/zerolog/log"
	"github.com/spf13/cobra"
)

var consoleCmd = &cobra.Command{
	Use:   "console",
	Short: "Stream the scripting console logs into shell",
	RunE: func(cmd *cobra.Command, args []string) error {
		runConsole()
		return nil
	},
}

func init() {
	rootCmd.AddCommand(consoleCmd)

	defaultTapConfig := configStructs.TapConfig{}
	if err := defaults.Set(&defaultTapConfig); err != nil {
		log.Debug().Err(err).Send()
	}

	consoleCmd.Flags().Uint16(configStructs.ProxyFrontPortLabel, defaultTapConfig.Proxy.Front.Port, "Provide a custom port for the proxy/port-forward")
	consoleCmd.Flags().String(configStructs.ProxyHostLabel, defaultTapConfig.Proxy.Host, "Provide a custom host for the proxy/port-forward")
	consoleCmd.Flags().StringP(configStructs.ReleaseNamespaceLabel, "s", defaultTapConfig.Release.Namespace, "Release namespace of Kubeshark")
}

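// runConsoleWithoutProxy polls the Hub's /echo endpoint until it is reachable, then streams the
// scripting console logs over the /api/scripts/logs websocket, reconnecting on errors.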
func runConsoleWithoutProxy() {
	log.Info().Msg("Starting scripting console ...")
	time.Sleep(5 * time.Second)
	hubUrl := kubernetes.GetHubUrl()
	for {

		// Attempt to connect to the Hub, retrying every 5 seconds on failure
		response, err := http.Get(fmt.Sprintf("%s/echo", hubUrl))
		if err != nil || response.StatusCode != 200 {
			log.Info().Msg(fmt.Sprintf(utils.Yellow, "Couldn't connect to Hub."))
			time.Sleep(5 * time.Second)
			continue
		}

		interrupt := make(chan os.Signal, 1)
		signal.Notify(interrupt, os.Interrupt)

		log.Info().Str("host", config.Config.Tap.Proxy.Host).Str("url", hubUrl).Msg("Connecting to:")
		u := url.URL{
			Scheme: "ws",
			Host:   fmt.Sprintf("%s:%d", config.Config.Tap.Proxy.Host, config.Config.Tap.Proxy.Front.Port),
			Path:   "/api/scripts/logs",
		}
		headers := http.Header{}
		headers.Set(utils.X_KUBESHARK_CAPTURE_HEADER_KEY, utils.X_KUBESHARK_CAPTURE_HEADER_IGNORE_VALUE)
		headers.Set("License-Key", config.Config.License)

		c, _, err := websocket.DefaultDialer.Dial(u.String(), headers)
		if err != nil {
			log.Error().Err(err).Msg("Websocket dial error, retrying in 5 seconds...")
			time.Sleep(5 * time.Second) // Delay before retrying
			continue
		}
		defer c.Close()

		done := make(chan struct{})

		go func() {
			defer close(done)
			for {
				_, message, err := c.ReadMessage()
				if err != nil {
					log.Error().Err(err).Msg("Error reading websocket message, reconnecting...")
					break // Break to reconnect
				}

				msg := string(message)
				if strings.Contains(msg, ":ERROR]") {
					msg = fmt.Sprintf(utils.Red, msg)
					fmt.Fprintln(os.Stderr, msg)
				} else {
					fmt.Fprintln(os.Stdout, msg)
				}
			}
		}()

		ticker := time.NewTicker(time.Second)
		defer ticker.Stop()

		select {
		case <-done:
			log.Warn().Msg(fmt.Sprintf(utils.Yellow, "Connection closed, reconnecting..."))
			time.Sleep(5 * time.Second) // Delay before reconnecting
			continue                    // Reconnect after error
		case <-interrupt:
			err := c.WriteMessage(websocket.CloseMessage, websocket.FormatCloseMessage(websocket.CloseNormalClosure, ""))
			if err != nil {
				log.Error().Err(err).Send()
				continue
			}

			select {
			case <-done:
			case <-time.After(time.Second):
			}
			return
		}
	}
}

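// runConsole starts the console stream in the background and keeps checking the Hub's /echo
// endpoint, (re)establishing the proxy whenever the Hub becomes unreachable.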
func runConsole() {
	go runConsoleWithoutProxy()

	// Create interrupt channel and setup signal handling once
	interrupt := make(chan os.Signal, 1)
	signal.Notify(interrupt, os.Interrupt)
	done := make(chan struct{})

	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()

	for {
		select {
		case <-interrupt:
			// Handle interrupt and exit gracefully
			log.Warn().Msg(fmt.Sprintf(utils.Yellow, "Received interrupt, exiting..."))
			select {
			case <-done:
			case <-time.After(time.Second):
			}
			return

		case <-ticker.C:
			// Attempt to connect to the Hub every 5 seconds
			hubUrl := kubernetes.GetHubUrl()
			response, err := http.Get(fmt.Sprintf("%s/echo", hubUrl))
			if err != nil || response.StatusCode != 200 {
				log.Info().Msg(fmt.Sprintf(utils.Yellow, "Couldn't connect to Hub. Establishing proxy..."))
				runProxy(false, true)
			}
		}
	}
}
0707010000000C000081A4000000000000000000000001689B9CB300000163000000000000000000000000000000000000002400000000kubeshark-cli-52.8.1/cmd/license.go
package cmd

import (
	"fmt"

	"github.com/kubeshark/kubeshark/config"
	"github.com/spf13/cobra"
)

var licenseCmd = &cobra.Command{
	Use:   "license",
	Short: "Print the license loaded string",
	RunE: func(cmd *cobra.Command, args []string) error {
		fmt.Println(config.Config.License)
		return nil
	},
}

func init() {
	rootCmd.AddCommand(licenseCmd)
}
0707010000000D000081A4000000000000000000000001689B9CB300000640000000000000000000000000000000000000002100000000kubeshark-cli-52.8.1/cmd/logs.go
package cmd

import (
	"context"
	"fmt"

	"github.com/creasty/defaults"
	"github.com/kubeshark/kubeshark/config"
	"github.com/kubeshark/kubeshark/config/configStructs"
	"github.com/kubeshark/kubeshark/errormessage"
	"github.com/kubeshark/kubeshark/misc"
	"github.com/kubeshark/kubeshark/misc/fsUtils"
	"github.com/rs/zerolog/log"
	"github.com/spf13/cobra"
)

var logsCmd = &cobra.Command{
	Use:   "logs",
	Short: "Create a ZIP file with logs for GitHub issues or troubleshooting",
	RunE: func(cmd *cobra.Command, args []string) error {
		kubernetesProvider, err := getKubernetesProviderForCli(false, false)
		if err != nil {
			return nil
		}
		ctx := context.Background()

		if validationErr := config.Config.Logs.Validate(); validationErr != nil {
			return errormessage.FormatError(validationErr)
		}

		log.Debug().Str("logs-path", config.Config.Logs.FilePath()).Msg("Using this logs path...")

		if dumpLogsErr := fsUtils.DumpLogs(ctx, kubernetesProvider, config.Config.Logs.FilePath(), config.Config.Logs.Grep); dumpLogsErr != nil {
			log.Error().Err(dumpLogsErr).Msg("Failed to dump logs.")
		}

		return nil
	},
}

func init() {
	rootCmd.AddCommand(logsCmd)

	defaultLogsConfig := configStructs.LogsConfig{}
	if err := defaults.Set(&defaultLogsConfig); err != nil {
		log.Debug().Err(err).Send()
	}

	logsCmd.Flags().StringP(configStructs.FileLogsName, "f", defaultLogsConfig.FileStr, fmt.Sprintf("Path for zip file (default current <pwd>\\%s_logs.zip)", misc.Program))
	logsCmd.Flags().StringP(configStructs.GrepLogsName, "g", defaultLogsConfig.Grep, "Regexp to do grepping on the logs")
}
0707010000000E000081A4000000000000000000000001689B9CB300000D83000000000000000000000000000000000000002500000000kubeshark-cli-52.8.1/cmd/pcapDump.go
package cmd

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"time"

	"github.com/creasty/defaults"
	"github.com/kubeshark/kubeshark/config/configStructs"
	"github.com/rs/zerolog"
	"github.com/rs/zerolog/log"
	"github.com/spf13/cobra"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

// pcapDumpCmd represents the consolidated pcapdump command
var pcapDumpCmd = &cobra.Command{
	Use:   "pcapdump",
	Short: "Store all captured traffic (including decrypted TLS) in a PCAP file.",
	RunE: func(cmd *cobra.Command, args []string) error {
		// Retrieve the kubeconfig path from the flag
		kubeconfig, _ := cmd.Flags().GetString(configStructs.PcapKubeconfig)

		// If kubeconfig is not provided, use the default location
		if kubeconfig == "" {
			if home := homedir.HomeDir(); home != "" {
				kubeconfig = filepath.Join(home, ".kube", "config")
			} else {
				return errors.New("kubeconfig flag not provided and no home directory available for default config location")
			}
		}

		debugEnabled, _ := cmd.Flags().GetBool("debug")
		if debugEnabled {
			zerolog.SetGlobalLevel(zerolog.DebugLevel)
			log.Debug().Msg("Debug logging enabled")
		} else {
			zerolog.SetGlobalLevel(zerolog.InfoLevel)
		}

		// Use the current context in kubeconfig
		config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return fmt.Errorf("Error building kubeconfig: %w", err)
		}

		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			return fmt.Errorf("Error creating Kubernetes client: %w", err)
		}

		// Parse the `--time` flag
		timeIntervalStr, _ := cmd.Flags().GetString("time")
		var cutoffTime *time.Time // Use a pointer to distinguish between provided and not provided
		if timeIntervalStr != "" {
			duration, err := time.ParseDuration(timeIntervalStr)
			if err != nil {
				return fmt.Errorf("Invalid format %w", err)
			}
			tempCutoffTime := time.Now().Add(-duration)
			cutoffTime = &tempCutoffTime
		}

		// Test the dest dir if provided
		destDir, _ := cmd.Flags().GetString(configStructs.PcapDest)
		if destDir != "" {
			info, err := os.Stat(destDir)
			if os.IsNotExist(err) {
				return fmt.Errorf("Directory does not exist: %s", destDir)
			}
			if err != nil {
				return fmt.Errorf("Error checking dest directory: %w", err)
			}
			if !info.IsDir() {
				return fmt.Errorf("Dest path is not a directory: %s", destDir)
			}
			tempFile, err := os.CreateTemp(destDir, "write-test-*")
			if err != nil {
				return fmt.Errorf("Directory %s is not writable", destDir)
			}
			_ = os.Remove(tempFile.Name())
		}

		log.Info().Msg("Copying PCAP files")
		err = copyPcapFiles(clientset, config, destDir, cutoffTime)
		if err != nil {
			return err
		}

		return nil
	},
}

func init() {
	rootCmd.AddCommand(pcapDumpCmd)

	defaultPcapDumpConfig := configStructs.PcapDumpConfig{}
	if err := defaults.Set(&defaultPcapDumpConfig); err != nil {
		log.Debug().Err(err).Send()
	}

	pcapDumpCmd.Flags().String(configStructs.PcapTime, "", "Time interval (e.g., 10m, 1h) in the past for which the pcaps are copied")
	pcapDumpCmd.Flags().String(configStructs.PcapDest, "", "Local destination path for copied PCAP files (cannot be used together with --enabled)")
	pcapDumpCmd.Flags().String(configStructs.PcapKubeconfig, "", "Path for kubeconfig (if not provided the default location will be checked)")
	pcapDumpCmd.Flags().Bool("debug", false, "Enable debug logging")
}
0707010000000F000081A4000000000000000000000001689B9CB30000280A000000000000000000000000000000000000002B00000000kubeshark-cli-52.8.1/cmd/pcapDumpRunner.go
package cmd

import (
	"bufio"
	"bytes"
	"context"
	"errors"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"

	"github.com/kubeshark/gopacket/pcapgo"
	"github.com/rs/zerolog/log"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	clientk8s "k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

const (
	label                 = "app.kubeshark.co/app=worker"
	srcDir                = "pcapdump"
	maxSnaplen     uint32 = 262144
	maxTimePerFile        = time.Minute * 5
)

// PodFileInfo represents information about a pod, its namespace, and associated files
type PodFileInfo struct {
	Pod         corev1.Pod
	SrcDir      string
	Files       []string
	CopiedFiles []string
}

// listWorkerPods fetches all worker pods from multiple namespaces
func listWorkerPods(ctx context.Context, clientset *clientk8s.Clientset, namespaces []string) ([]*PodFileInfo, error) {
	var podFileInfos []*PodFileInfo
	var errs []error
	labelSelector := label

	for _, namespace := range namespaces {
		// List all pods matching the label in the current namespace
		pods, err := clientset.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{
			LabelSelector: labelSelector,
		})
		if err != nil {
			errs = append(errs, fmt.Errorf("failed to list worker pods in namespace %s: %w", namespace, err))
			continue
		}

		for _, pod := range pods.Items {
			podFileInfos = append(podFileInfos, &PodFileInfo{
				Pod: pod,
			})
		}
	}

	return podFileInfos, errors.Join(errs...)
}

// listFilesInPodDir lists all files in the specified directory inside the pod across multiple namespaces
func listFilesInPodDir(ctx context.Context, clientset *clientk8s.Clientset, config *rest.Config, pod *PodFileInfo, cutoffTime *time.Time) error {
	nodeName := pod.Pod.Spec.NodeName
	srcFilePath := filepath.Join("data", nodeName, srcDir)

	cmd := []string{"ls", srcFilePath}
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Name(pod.Pod.Name).
		Namespace(pod.Pod.Namespace).
		SubResource("exec").
		Param("container", "sniffer").
		Param("stdout", "true").
		Param("stderr", "true").
		Param("command", cmd[0]).
		Param("command", cmd[1])

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return err
	}

	var stdoutBuf bytes.Buffer
	var stderrBuf bytes.Buffer

	// Execute the command to list files
	err = exec.StreamWithContext(ctx, remotecommand.StreamOptions{
		Stdout: &stdoutBuf,
		Stderr: &stderrBuf,
	})
	if err != nil {
		return err
	}

	// Split the output (file names) into a list
	files := strings.Split(strings.TrimSpace(stdoutBuf.String()), "\n")
	if len(files) == 0 {
		// No files were found in the target dir for this pod
		return nil
	}

	var filteredFiles []string
	var fileProcessingErrs []error
	// Filter files based on cutoff time if provided
	for _, file := range files {
		if cutoffTime != nil {
			parts := strings.Split(file, "-")
			if len(parts) < 2 {
				continue
			}

			timestampStr := parts[len(parts)-2] + parts[len(parts)-1][:6] // Extract YYYYMMDDHHMMSS
			fileTime, err := time.Parse("20060102150405", timestampStr)
			if err != nil {
				fileProcessingErrs = append(fileProcessingErrs, fmt.Errorf("failed to parse file timestamp %s: %w", file, err))
				continue
			}

			if fileTime.Before(*cutoffTime) {
				continue
			}
		}
		// Add file to filtered list
		filteredFiles = append(filteredFiles, file)
	}

	pod.SrcDir = srcDir
	pod.Files = filteredFiles

	return errors.Join(fileProcessingErrs...)
}

// copyFileFromPod copies a single file from a pod to a local destination
func copyFileFromPod(ctx context.Context, clientset *kubernetes.Clientset, config *rest.Config, pod *PodFileInfo, srcFile, destFile string) error {
	// Construct the complete path using /data, the node name, srcDir, and srcFile
	nodeName := pod.Pod.Spec.NodeName
	srcFilePath := filepath.Join("data", nodeName, srcDir, srcFile)

	// Execute the `cat` command to read the file at the srcFilePath
	cmd := []string{"cat", srcFilePath}
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Name(pod.Pod.Name).
		Namespace(pod.Pod.Namespace).
		SubResource("exec").
		Param("container", "sniffer").
		Param("stdout", "true").
		Param("stderr", "true").
		Param("command", cmd[0]).
		Param("command", cmd[1])

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return fmt.Errorf("failed to initialize executor for pod %s in namespace %s: %w", pod.Pod.Name, pod.Pod.Namespace, err)
	}

	// Create the local file to write the content to
	outFile, err := os.Create(destFile)
	if err != nil {
		return fmt.Errorf("failed to create destination file: %w", err)
	}
	defer outFile.Close()

	// Capture stderr for error logging
	var stderrBuf bytes.Buffer

	// Stream the file content from the pod to the local file
	err = exec.StreamWithContext(ctx, remotecommand.StreamOptions{
		Stdout: outFile,
		Stderr: &stderrBuf,
	})
	if err != nil {
		return err
	}

	return nil
}

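// mergePCAPs concatenates the packets of the given input PCAP files into a single output file,
// skipping empty or unreadable inputs and collecting any per-file errors for debug logging.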
func mergePCAPs(outputFile string, inputFiles []string) error {
	// Create the output file
	f, err := os.Create(outputFile)
	if err != nil {
		return fmt.Errorf("failed to create output file: %w", err)
	}
	defer f.Close()

	bufWriter := bufio.NewWriterSize(f, 4*1024*1024)
	defer bufWriter.Flush()

	// Create the PCAP writer
	writer := pcapgo.NewWriter(bufWriter)
	err = writer.WriteFileHeader(maxSnaplen, 1)
	if err != nil {
		return fmt.Errorf("failed to write PCAP file header: %w", err)
	}

	var mergingErrs []error

	for _, inputFile := range inputFiles {
		// Open the input file
		file, err := os.Open(inputFile)
		if err != nil {
			mergingErrs = append(mergingErrs, fmt.Errorf("failed to open %s: %w", inputFile, err))
			continue
		}

		fileInfo, err := file.Stat()
		if err != nil {
			mergingErrs = append(mergingErrs, fmt.Errorf("failed to stat file %s: %w", inputFile, err))
			file.Close()
			continue
		}

		if fileInfo.Size() == 0 {
			// Skip empty files
			log.Debug().Msgf("Skipped empty file: %s", inputFile)
			file.Close()
			continue
		}

		// Create the PCAP reader for the input file
		reader, err := pcapgo.NewReader(file)
		if err != nil {
			mergingErrs = append(mergingErrs, fmt.Errorf("failed to create pcapng reader for %v: %w", file.Name(), err))
			file.Close()
			continue
		}

		for {
			// Read packet data
			data, ci, err := reader.ReadPacketData()
			if err != nil {
				if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) {
					break
				}
				mergingErrs = append(mergingErrs, fmt.Errorf("error reading packet from file %s: %w", file.Name(), err))
				break
			}

			// Write the packet to the output file
			err = writer.WritePacket(ci, data)
			if err != nil {
				log.Error().Err(err).Msgf("Error writing packet to output file")
				mergingErrs = append(mergingErrs, fmt.Errorf("error writing packet to output file: %w", err))
				break
			}
		}

		file.Close()
	}

	// Emit any per-file merge errors at debug level
	log.Debug().Err(errors.Join(mergingErrs...)).Send()

	return nil
}

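// copyPcapFiles finds all worker pods across namespaces, copies their pcapdump files in parallel
// (optionally filtered by cutoffTime), merges the copies into a single PCAP named after the
// cluster ID and a timestamp, and removes the intermediate files.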
func copyPcapFiles(clientset *kubernetes.Clientset, config *rest.Config, destDir string, cutoffTime *time.Time) error {
	// List all namespaces
	namespaceList, err := clientset.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}

	var targetNamespaces []string
	for _, ns := range namespaceList.Items {
		targetNamespaces = append(targetNamespaces, ns.Name)
	}

	// List all worker pods
	workerPods, err := listWorkerPods(context.Background(), clientset, targetNamespaces)
	if err != nil {
		if len(workerPods) == 0 {
			return err
		}
		log.Debug().Err(err).Msg("error while listing worker pods")
	}

	var wg sync.WaitGroup

	// Launch a goroutine for each pod
	for _, pod := range workerPods {
		wg.Add(1)

		go func(pod *PodFileInfo) {
			defer wg.Done()

			// List files for the current pod
			err := listFilesInPodDir(context.Background(), clientset, config, pod, cutoffTime)
			if err != nil {
				log.Debug().Err(err).Msgf("error listing files in pod %s", pod.Pod.Name)
				return
			}

			// Copy files from the pod
			for _, file := range pod.Files {
				destFile := filepath.Join(destDir, file)

				// Add a timeout context for file copy
				ctx, cancel := context.WithTimeout(context.Background(), maxTimePerFile)
				err := copyFileFromPod(ctx, clientset, config, pod, file, destFile)
				cancel()
				if err != nil {
					log.Debug().Err(err).Msgf("error copying file %s from pod %s in namespace %s", file, pod.Pod.Name, pod.Pod.Namespace)
					continue
				}

				log.Info().Msgf("Copied file %s from pod %s to %s", file, pod.Pod.Name, destFile)
				pod.CopiedFiles = append(pod.CopiedFiles, destFile)
			}
		}(pod)
	}

	// Wait for all goroutines to complete
	wg.Wait()

	var copiedFiles []string
	for _, pod := range workerPods {
		copiedFiles = append(copiedFiles, pod.CopiedFiles...)
	}

	if len(copiedFiles) == 0 {
		log.Info().Msg("No pcaps available to copy on the workers")
		return nil
	}

	// Generate a temporary filename for the merged file
	tempMergedFile := copiedFiles[0] + "_temp"

	// Merge PCAP files
	err = mergePCAPs(tempMergedFile, copiedFiles)
	if err != nil {
		os.Remove(tempMergedFile)
		return fmt.Errorf("error merging files: %w", err)
	}

	// Remove the original files after merging
	for _, file := range copiedFiles {
		if err = os.Remove(file); err != nil {
			log.Debug().Err(err).Msgf("error removing file %s", file)
		}
	}

	clusterID, err := getClusterID(clientset)
	if err != nil {
		return fmt.Errorf("failed to get cluster ID: %w", err)
	}
	timestamp := time.Now().Format("2006-01-02_15-04")
	// Rename the temp file to the final name
	finalMergedFile := filepath.Join(destDir, fmt.Sprintf("%s-%s.pcap", clusterID, timestamp))
	err = os.Rename(tempMergedFile, finalMergedFile)
	if err != nil {
		return err
	}

	log.Info().Msgf("Merged file created: %s", finalMergedFile)
	return nil
}

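// getClusterID returns the UID of the kube-system namespace, which is commonly
// used as a stable identifier of the cluster.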
func getClusterID(clientset *kubernetes.Clientset) (string, error) {
	namespace, err := clientset.CoreV1().Namespaces().Get(context.TODO(), "kube-system", metav1.GetOptions{})
	if err != nil {
		return "", fmt.Errorf("failed to get kube-system namespace UID: %w", err)
	}
	return string(namespace.UID), nil
}
07070100000010000081A4000000000000000000000001689B9CB3000004E9000000000000000000000000000000000000002200000000kubeshark-cli-52.8.1/cmd/pprof.gopackage cmd

import (
	"github.com/creasty/defaults"
	"github.com/kubeshark/kubeshark/config/configStructs"
	"github.com/rs/zerolog/log"
	"github.com/spf13/cobra"
)

var pprofCmd = &cobra.Command{
	Use:   "pprof",
	Short: "Select a Kubeshark container and open the pprof web UI in the browser",
	RunE: func(cmd *cobra.Command, args []string) error {
		runPprof()
		return nil
	},
}

func init() {
	rootCmd.AddCommand(pprofCmd)

	defaultTapConfig := configStructs.TapConfig{}
	if err := defaults.Set(&defaultTapConfig); err != nil {
		log.Debug().Err(err).Send()
	}

	pprofCmd.Flags().Uint16(configStructs.ProxyFrontPortLabel, defaultTapConfig.Proxy.Front.Port, "Provide a custom port for the proxy/port-forward")
	pprofCmd.Flags().String(configStructs.ProxyHostLabel, defaultTapConfig.Proxy.Host, "Provide a custom host for the proxy/port-forward")
	pprofCmd.Flags().StringP(configStructs.ReleaseNamespaceLabel, "s", defaultTapConfig.Release.Namespace, "Release namespace of Kubeshark")
	pprofCmd.Flags().Uint16(configStructs.PprofPortLabel, defaultTapConfig.Pprof.Port, "Provide a custom port for the pprof server")
	pprofCmd.Flags().String(configStructs.PprofViewLabel, defaultTapConfig.Pprof.View, "Change the default view of the pprof web interface")
}
07070100000011000081A4000000000000000000000001689B9CB3000011F3000000000000000000000000000000000000002800000000kubeshark-cli-52.8.1/cmd/pprofRunner.gopackage cmd

import (
	"context"
	"fmt"

	"github.com/go-cmd/cmd"
	"github.com/kubeshark/kubeshark/config"
	"github.com/kubeshark/kubeshark/kubernetes"
	"github.com/kubeshark/kubeshark/utils"
	"github.com/rivo/tview"
	"github.com/rs/zerolog/log"
	v1 "k8s.io/api/core/v1"
)

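// runPprof establishes the proxy, lists the Hub and Worker pods of the release and
// presents a terminal UI for opening the pprof endpoints of their containers in the browser.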
func runPprof() {
	runProxy(false, true)

	provider, err := getKubernetesProviderForCli(false, false)
	if err != nil {
		return
	}

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	hubPods, err := provider.ListPodsByAppLabel(ctx, config.Config.Tap.Release.Namespace, map[string]string{kubernetes.AppLabelKey: "hub"})
	if err != nil {
		log.Error().
			Err(err).
			Msg("Failed to list hub pods!")
		cancel()
		return
	}

	workerPods, err := provider.ListPodsByAppLabel(ctx, config.Config.Tap.Release.Namespace, map[string]string{kubernetes.AppLabelKey: "worker"})
	if err != nil {
		log.Error().
			Err(err).
			Msg("Failed to list worker pods!")
		cancel()
		return
	}

	fullscreen := true

	app := tview.NewApplication()
	list := tview.NewList()

	var currentCmd *cmd.Cmd

	i := 48 // ASCII '0': shortcut runes for the list items start at '0' and increment
	for _, pod := range hubPods {
		for _, container := range pod.Spec.Containers {
			log.Info().Str("pod", pod.Name).Str("container", container.Name).Send()
			homeUrl := fmt.Sprintf("%s/debug/pprof/", kubernetes.GetHubUrl())
			modal := buildNewModal(
				pod,
				container,
				homeUrl,
				app,
				list,
				fullscreen,
				currentCmd,
			)
			list.AddItem(fmt.Sprintf("pod: %s container: %s", pod.Name, container.Name), pod.Spec.NodeName, rune(i), func() {
				app.SetRoot(modal, fullscreen)
			})
			i++
		}
	}

	for _, pod := range workerPods {
		for _, container := range pod.Spec.Containers {
			log.Info().Str("pod", pod.Name).Str("container", container.Name).Send()
			homeUrl := fmt.Sprintf("%s/pprof/%s/%s/", kubernetes.GetHubUrl(), pod.Status.HostIP, container.Name)
			modal := buildNewModal(
				pod,
				container,
				homeUrl,
				app,
				list,
				fullscreen,
				currentCmd,
			)
			list.AddItem(fmt.Sprintf("pod: %s container: %s", pod.Name, container.Name), pod.Spec.NodeName, rune(i), func() {
				app.SetRoot(modal, fullscreen)
			})
			i++
		}
	}

	list.AddItem("Quit", "Press to exit", 'q', func() {
		if currentCmd != nil {
			err = currentCmd.Stop()
			if err != nil {
				log.Error().Err(err).Str("name", currentCmd.Name).Msg("Failed to stop process!")
			}
		}
		app.Stop()
	})

	if err := app.SetRoot(list, fullscreen).EnableMouse(true).Run(); err != nil {
		panic(err)
	}
}

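// buildNewModal returns a modal for the given container that can open its pprof debug
// home page or launch `go tool pprof` against its CPU, heap or goroutine profile and
// open the resulting web UI in the browser.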
func buildNewModal(
	pod v1.Pod,
	container v1.Container,
	homeUrl string,
	app *tview.Application,
	list *tview.List,
	fullscreen bool,
	currentCmd *cmd.Cmd,
) *tview.Modal {
	return tview.NewModal().
		SetText(fmt.Sprintf("pod: %s container: %s", pod.Name, container.Name)).
		AddButtons([]string{
			"Open Debug Home Page",
			"Profile: CPU",
			"Profile: Memory",
			"Profile: Goroutine",
			"Cancel",
		}).
		SetDoneFunc(func(buttonIndex int, buttonLabel string) {
			var err error
			port := fmt.Sprintf(":%d", config.Config.Tap.Pprof.Port)
			view := fmt.Sprintf("http://localhost%s/ui/%s", port, config.Config.Tap.Pprof.View)

			switch buttonLabel {
			case "Open Debug Home Page":
				utils.OpenBrowser(homeUrl)
			case "Profile: CPU":
				if currentCmd != nil {
					err = currentCmd.Stop()
					if err != nil {
						log.Error().Err(err).Str("name", currentCmd.Name).Msg("Failed to stop process!")
					}
				}
				currentCmd = cmd.NewCmd("go", "tool", "pprof", "-http", port, "-no_browser", fmt.Sprintf("%sprofile", homeUrl))
				currentCmd.Start()
				utils.OpenBrowser(view)
			case "Profile: Memory":
				if currentCmd != nil {
					err = currentCmd.Stop()
					if err != nil {
						log.Error().Err(err).Str("name", currentCmd.Name).Msg("Failed to stop process!")
					}
				}
				currentCmd = cmd.NewCmd("go", "tool", "pprof", "-http", port, "-no_browser", fmt.Sprintf("%sheap", homeUrl))
				currentCmd.Start()
				utils.OpenBrowser(view)
			case "Profile: Goroutine":
				if currentCmd != nil {
					err = currentCmd.Stop()
					if err != nil {
						log.Error().Err(err).Str("name", currentCmd.Name).Msg("Failed to stop process!")
					}
				}
				currentCmd = cmd.NewCmd("go", "tool", "pprof", "-http", port, "-no_browser", fmt.Sprintf("%sgoroutine", homeUrl))
				currentCmd.Start()
				utils.OpenBrowser(view)
			case "Cancel":
				if currentCmd != nil {
					err = currentCmd.Stop()
					if err != nil {
						log.Error().Err(err).Str("name", currentCmd.Name).Msg("Failed to stop process!")
					}
				}
				fallthrough
			default:
				app.SetRoot(list, fullscreen)
			}
		})
}
07070100000012000081A4000000000000000000000001689B9CB3000003E4000000000000000000000000000000000000002200000000kubeshark-cli-52.8.1/cmd/proxy.gopackage cmd

import (
	"github.com/creasty/defaults"
	"github.com/kubeshark/kubeshark/config/configStructs"
	"github.com/rs/zerolog/log"
	"github.com/spf13/cobra"
)

var proxyCmd = &cobra.Command{
	Use:   "proxy",
	Short: "Open the web UI (front-end) in the browser via proxy/port-forward",
	RunE: func(cmd *cobra.Command, args []string) error {
		runProxy(true, false)
		return nil
	},
}

func init() {
	rootCmd.AddCommand(proxyCmd)

	defaultTapConfig := configStructs.TapConfig{}
	if err := defaults.Set(&defaultTapConfig); err != nil {
		log.Debug().Err(err).Send()
	}

	proxyCmd.Flags().Uint16(configStructs.ProxyFrontPortLabel, defaultTapConfig.Proxy.Front.Port, "Provide a custom port for the proxy/port-forward")
	proxyCmd.Flags().String(configStructs.ProxyHostLabel, defaultTapConfig.Proxy.Host, "Provide a custom host for the proxy/port-forward")
	proxyCmd.Flags().StringP(configStructs.ReleaseNamespaceLabel, "s", defaultTapConfig.Release.Namespace, "Release namespace of Kubeshark")
}
07070100000013000081A4000000000000000000000001689B9CB300000B2F000000000000000000000000000000000000002800000000kubeshark-cli-52.8.1/cmd/proxyRunner.gopackage cmd

import (
	"context"
	"fmt"
	"net/http"

	"github.com/kubeshark/kubeshark/config"
	"github.com/kubeshark/kubeshark/config/configStructs"
	"github.com/kubeshark/kubeshark/internal/connect"
	"github.com/kubeshark/kubeshark/kubernetes"
	"github.com/kubeshark/kubeshark/misc"
	"github.com/kubeshark/kubeshark/utils"
	"github.com/rs/zerolog/log"
)

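// runProxy makes sure the front and hub services exist, reuses an already responding
// proxy on the configured front port if there is one, and otherwise establishes a new
// proxy/port-forward. With block set it waits for termination after establishing the
// proxy; with noBrowser set it does not open the web UI.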
func runProxy(block bool, noBrowser bool) {
	kubernetesProvider, err := getKubernetesProviderForCli(false, false)
	if err != nil {
		return
	}

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	exists, err := kubernetesProvider.DoesServiceExist(ctx, config.Config.Tap.Release.Namespace, kubernetes.FrontServiceName)
	if err != nil {
		log.Error().
			Str("service", kubernetes.FrontServiceName).
			Err(err).
			Msg("Failed to found service!")
		cancel()
		return
	}

	if !exists {
		log.Error().
			Str("service", kubernetes.FrontServiceName).
			Str("command", fmt.Sprintf("%s %s", misc.Program, tapCmd.Use)).
			Msg("Service not found! You should run the command first:")
		cancel()
		return
	}

	exists, err = kubernetesProvider.DoesServiceExist(ctx, config.Config.Tap.Release.Namespace, kubernetes.HubServiceName)
	if err != nil {
		log.Error().
			Str("service", kubernetes.HubServiceName).
			Err(err).
			Msg("Failed to found service!")
		cancel()
		return
	}

	if !exists {
		log.Error().
			Str("service", kubernetes.HubServiceName).
			Str("command", fmt.Sprintf("%s %s", misc.Program, tapCmd.Use)).
			Msg("Service not found! You should run the command first:")
		cancel()
		return
	}

	var establishedProxy bool

	frontUrl := kubernetes.GetProxyOnPort(config.Config.Tap.Proxy.Front.Port)
	response, err := http.Get(fmt.Sprintf("%s/", frontUrl))
	if response != nil {
		response.Body.Close() // the probe only needs the status code
	}
	if err == nil && response.StatusCode == 200 {
		log.Info().
			Str("service", kubernetes.FrontServiceName).
			Int("port", int(config.Config.Tap.Proxy.Front.Port)).
			Msg("Found a running service.")

		okToOpen("Kubeshark", frontUrl, noBrowser)
	} else {
		startProxyReportErrorIfAny(
			kubernetesProvider,
			ctx,
			kubernetes.FrontServiceName,
			kubernetes.FrontPodName,
			configStructs.ProxyFrontPortLabel,
			config.Config.Tap.Proxy.Front.Port,
			configStructs.ContainerPort,
			"",
		)
		connector := connect.NewConnector(frontUrl, connect.DefaultRetries, connect.DefaultTimeout)
		if err := connector.TestConnection(""); err != nil {
			log.Error().Msg(fmt.Sprintf(utils.Red, "Couldn't connect to Front."))
			return
		}

		establishedProxy = true
		okToOpen("Kubeshark", frontUrl, noBrowser)
	}
	if establishedProxy && block {
		utils.WaitForTermination(ctx, cancel)
	}

}

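// okToOpen logs that the named service is reachable at the given URL and opens it in
// the browser unless headless mode is enabled or noBrowser is set.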
func okToOpen(name string, url string, noBrowser bool) {
	log.Info().Str("url", url).Msg(fmt.Sprintf(utils.Green, fmt.Sprintf("%s is available at:", name)))

	if !config.Config.HeadlessMode && !noBrowser {
		utils.OpenBrowser(url)
	}
}
07070100000014000081A4000000000000000000000001689B9CB300000566000000000000000000000000000000000000002100000000kubeshark-cli-52.8.1/cmd/root.gopackage cmd

import (
	"fmt"

	"github.com/creasty/defaults"
	"github.com/kubeshark/kubeshark/config"
	"github.com/kubeshark/kubeshark/misc"
	"github.com/rs/zerolog/log"
	"github.com/spf13/cobra"
)

var rootCmd = &cobra.Command{
	Use:   "kubeshark",
	Short: fmt.Sprintf("%s: %s", misc.Software, misc.Description),
	Long: fmt.Sprintf(`%s: %s
An extensible Kubernetes-aware network sniffer and kernel tracer.
For more info: %s`, misc.Software, misc.Description, misc.Website),
	PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
		if err := config.InitConfig(cmd); err != nil {
			log.Fatal().Err(err).Send()
		}

		return nil
	},
}

func init() {
	defaultConfig := config.CreateDefaultConfig()
	if err := defaults.Set(&defaultConfig); err != nil {
		log.Debug().Err(err).Send()
	}

	rootCmd.PersistentFlags().StringSlice(config.SetCommandName, []string{}, fmt.Sprintf("Override values using --%s", config.SetCommandName))
	rootCmd.PersistentFlags().BoolP(config.DebugFlag, "d", false, "Enable debug mode")
	rootCmd.PersistentFlags().String(config.ConfigPathFlag, "", fmt.Sprintf("Set the config path, default: %s", config.GetConfigFilePath(nil)))
}

// Execute adds all child commands to the root command and sets flags appropriately.
// This is called by main.main(). It only needs to happen once to the rootCmd.
func Execute() {
	cobra.CheckErr(rootCmd.Execute())
}
07070100000015000081A4000000000000000000000001689B9CB300002414000000000000000000000000000000000000002400000000kubeshark-cli-52.8.1/cmd/scripts.gopackage cmd

import (
	"context"
	"encoding/json"
	"errors"
	"os"
	"os/signal"
	"strings"
	"sync"
	"time"

	"github.com/creasty/defaults"
	"github.com/fsnotify/fsnotify"
	"github.com/kubeshark/kubeshark/config"
	"github.com/kubeshark/kubeshark/config/configStructs"
	"github.com/kubeshark/kubeshark/kubernetes"
	"github.com/kubeshark/kubeshark/misc"
	"github.com/rs/zerolog/log"
	"github.com/spf13/cobra"
	k8serrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
)

var scriptsCmd = &cobra.Command{
	Use:   "scripts",
	Short: "Watch the `scripting.source` and/or `scripting.sources` folders for changes and update the scripts",
	RunE: func(cmd *cobra.Command, args []string) error {
		runScripts()
		return nil
	},
}

func init() {
	rootCmd.AddCommand(scriptsCmd)

	defaultTapConfig := configStructs.TapConfig{}
	if err := defaults.Set(&defaultTapConfig); err != nil {
		log.Debug().Err(err).Send()
	}

	scriptsCmd.Flags().Uint16(configStructs.ProxyFrontPortLabel, defaultTapConfig.Proxy.Front.Port, "Provide a custom port for the proxy/port-forward")
	scriptsCmd.Flags().String(configStructs.ProxyHostLabel, defaultTapConfig.Proxy.Host, "Provide a custom host for the proxy/port-forward")
	scriptsCmd.Flags().StringP(configStructs.ReleaseNamespaceLabel, "s", defaultTapConfig.Release.Namespace, "Release namespace of Kubeshark")
}

func runScripts() {
	if config.Config.Scripting.Source == "" && len(config.Config.Scripting.Sources) == 0 {
		log.Error().Msg("Both `scripting.source` and `scripting.sources` fields are empty.")
		return
	}

	kubernetesProvider, err := getKubernetesProviderForCli(false, false)
	if err != nil {
		log.Error().Err(err).Send()
		return
	}

	var wg sync.WaitGroup

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	signalChan := make(chan os.Signal, 1)
	signal.Notify(signalChan, os.Interrupt)

	wg.Add(1)
	go func() {
		defer wg.Done()
		watchConfigMap(ctx, kubernetesProvider)
	}()

	wg.Add(1)
	go func() {
		defer wg.Done()
		watchScripts(ctx, kubernetesProvider, true)
	}()

	go func() {
		<-signalChan
		log.Debug().Msg("Received interrupt, stopping watchers.")
		cancel()
	}()

	wg.Wait()

}

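// createScript writes the script into the scripting ConfigMap, reusing the index of an
// existing script with the same title when one is found. Conflicting ConfigMap updates
// are retried up to maxRetries times.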
func createScript(provider *kubernetes.Provider, script misc.ConfigMapScript) (index int64, err error) {
	const maxRetries = 5
	var scripts map[int64]misc.ConfigMapScript

	for i := 0; i < maxRetries; i++ {
		scripts, err = kubernetes.ConfigGetScripts(provider)
		if err != nil {
			return
		}
		script.Active = kubernetes.IsActiveScript(provider, script.Title)
		index = 0
		if script.Title != "New Script" {
			for i, v := range scripts {
				if index <= i {
					index = i + 1
				}
				if v.Title == script.Title {
					index = int64(i)
				}
			}
		}
		scripts[index] = script

		log.Info().Str("title", script.Title).Bool("Active", script.Active).Int64("Index", index).Msg("Creating script")
		var data []byte
		data, err = json.Marshal(scripts)
		if err != nil {
			return
		}

		_, err = kubernetes.SetConfig(provider, kubernetes.CONFIG_SCRIPTING_SCRIPTS, string(data))
		if err == nil {
			return index, nil
		}

		if k8serrors.IsConflict(err) {
			log.Debug().Err(err).Msg("Conflict detected, retrying update...")
			time.Sleep(500 * time.Millisecond)
			continue
		}

		return 0, err
	}

	log.Error().Msg("Max retries reached for creating script due to conflicts.")
	return 0, errors.New("max retries reached due to conflicts while creating script")
}

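// updateScript replaces the script stored at the given index in the scripting ConfigMap,
// refreshing its active state.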
func updateScript(provider *kubernetes.Provider, index int64, script misc.ConfigMapScript) (err error) {
	var scripts map[int64]misc.ConfigMapScript
	scripts, err = kubernetes.ConfigGetScripts(provider)
	if err != nil {
		return
	}
	script.Active = kubernetes.IsActiveScript(provider, script.Title)
	scripts[index] = script

	var data []byte
	data, err = json.Marshal(scripts)
	if err != nil {
		return
	}

	_, err = kubernetes.SetConfig(provider, kubernetes.CONFIG_SCRIPTING_SCRIPTS, string(data))
	if err != nil {
		return
	}

	return
}

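// deleteScript deactivates the script stored at the given index and removes it from the
// scripting ConfigMap.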
func deleteScript(provider *kubernetes.Provider, index int64) (err error) {
	var scripts map[int64]misc.ConfigMapScript
	scripts, err = kubernetes.ConfigGetScripts(provider)
	if err != nil {
		return
	}
	err = kubernetes.DeleteActiveScriptByTitle(provider, scripts[index].Title)
	if err != nil {
		return
	}
	delete(scripts, index)

	var data []byte
	data, err = json.Marshal(scripts)
	if err != nil {
		return
	}

	_, err = kubernetes.SetConfig(provider, kubernetes.CONFIG_SCRIPTING_SCRIPTS, string(data))
	if err != nil {
		return
	}

	return
}

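// watchScripts uploads the local scripts and then watches the configured source folder(s)
// for changes, creating, updating or deleting the corresponding ConfigMap entries.
// When block is true it returns only after the context is cancelled.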
func watchScripts(ctx context.Context, provider *kubernetes.Provider, block bool) {
	files := make(map[string]int64)

	scripts, err := config.Config.Scripting.GetScripts()
	if err != nil {
		log.Error().Err(err).Send()
		return
	}

	for _, script := range scripts {
		index, err := createScript(provider, script.ConfigMap())
		if err != nil {
			log.Error().Err(err).Send()
			continue
		}

		files[script.Path] = index
	}

	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Error().Err(err).Send()
		return
	}
	if block {
		defer watcher.Close()
	}

	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	signalChan := make(chan os.Signal, 1)
	signal.Notify(signalChan, os.Interrupt)

	go func() {
		<-signalChan
		log.Debug().Msg("Received interrupt, stopping script watch.")
		cancel()
		watcher.Close()
	}()

	if config.Config.Scripting.Source != "" {
		if err := watcher.Add(config.Config.Scripting.Source); err != nil {
			log.Error().Err(err).Msg("Failed to add scripting source to watcher")
			return
		}
	}

	go func() {
		for {
			select {
			case <-ctx.Done():
				log.Debug().Msg("Script watcher exiting gracefully.")
				return

			// watch for events
			case event := <-watcher.Events:
				if !strings.HasSuffix(event.Name, "js") {
					log.Info().Str("file", event.Name).Msg("Ignoring file")
					continue
				}
				switch event.Op {
				case fsnotify.Create:
					script, err := misc.ReadScriptFile(event.Name)
					if err != nil {
						log.Error().Err(err).Send()
						continue
					}

					index, err := createScript(provider, script.ConfigMap())
					if err != nil {
						log.Error().Err(err).Send()
						continue
					}

					files[script.Path] = index

				case fsnotify.Write:
					index := files[event.Name]
					script, err := misc.ReadScriptFile(event.Name)
					if err != nil {
						log.Error().Err(err).Send()
						continue
					}

					err = updateScript(provider, index, script.ConfigMap())
					if err != nil {
						log.Error().Err(err).Send()
						continue
					}

				case fsnotify.Rename:
					index := files[event.Name]
					err := deleteScript(provider, index)
					if err != nil {
						log.Error().Err(err).Send()
						continue
					}

				default:
					// pass
				}

			case err, ok := <-watcher.Errors:
				if !ok {
					log.Info().Msg("Watcher errors channel closed.")
					return
				}
				log.Error().Err(err).Msg("Watcher error encountered")
			}
		}
	}()

	if config.Config.Scripting.Source != "" {
		if err := watcher.Add(config.Config.Scripting.Source); err != nil {
			log.Error().Err(err).Send()
		}
	}

	for _, source := range config.Config.Scripting.Sources {
		if err := watcher.Add(source); err != nil {
			log.Error().Err(err).Send()
		}
	}

	log.Info().Str("folder", config.Config.Scripting.Source).Interface("folders", config.Config.Scripting.Sources).Msg("Watching scripts against changes:")

	if block {
		<-ctx.Done()
	}
}

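// watchConfigMap watches the Kubeshark ConfigMap in the release namespace and
// re-synchronizes the local scripts when the ConfigMap is added, restarting the watch
// when it is deleted or the watch channel closes and retrying while it is missing.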
func watchConfigMap(ctx context.Context, provider *kubernetes.Provider) {
	clientset := provider.GetClientSet()
	configMapName := kubernetes.SELF_RESOURCES_PREFIX + kubernetes.SUFFIX_CONFIG_MAP

	for {
		select {
		case <-ctx.Done():
			log.Info().Msg("ConfigMap watcher exiting gracefully.")
			return

		default:
			watcher, err := clientset.CoreV1().ConfigMaps(config.Config.Tap.Release.Namespace).Watch(context.TODO(), metav1.ListOptions{
				FieldSelector: "metadata.name=" + configMapName,
			})
			if err != nil {
				log.Warn().Err(err).Msg("ConfigMap not found, retrying in 5 seconds...")
				time.Sleep(5 * time.Second)
				continue
			}

			// Create a goroutine to process events
			watcherClosed := make(chan struct{})
			go func() {
				defer close(watcherClosed)
				for event := range watcher.ResultChan() {
					if event.Type == watch.Added {
						log.Info().Msg("ConfigMap created or modified")
						runScriptsSync(provider)
					} else if event.Type == watch.Deleted {
						log.Warn().Msg("ConfigMap deleted, waiting for recreation...")
						break
					}
				}
			}()

			// Wait for either context cancellation or watcher completion
			select {
			case <-ctx.Done():
				watcher.Stop()
				log.Info().Msg("ConfigMap watcher stopping due to context cancellation")
				return
			case <-watcherClosed:
				log.Info().Msg("Watcher closed, restarting...")
			}

			time.Sleep(5 * time.Second)
		}
	}
}

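// runScriptsSync pushes all local scripts into the scripting ConfigMap.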
func runScriptsSync(provider *kubernetes.Provider) {
	files := make(map[string]int64)

	scripts, err := config.Config.Scripting.GetScripts()
	if err != nil {
		log.Error().Err(err).Send()
		return
	}

	for _, script := range scripts {
		index, err := createScript(provider, script.ConfigMap())
		if err != nil {
			log.Error().Err(err).Send()
			continue
		}
		files[script.Path] = index
	}
	log.Info().Msg("Synchronized scripts with ConfigMap.")
}
07070100000016000081A4000000000000000000000001689B9CB300000F39000000000000000000000000000000000000002000000000kubeshark-cli-52.8.1/cmd/tap.gopackage cmd

import (
	"errors"

	"github.com/creasty/defaults"
	"github.com/kubeshark/kubeshark/config"
	"github.com/kubeshark/kubeshark/config/configStructs"
	"github.com/kubeshark/kubeshark/errormessage"
	"github.com/rs/zerolog/log"
	"github.com/spf13/cobra"
)

var tapCmd = &cobra.Command{
	Use:   "tap [POD REGEX]",
	Short: "Capture the network traffic in your Kubernetes cluster",
	RunE: func(cmd *cobra.Command, args []string) error {
		tap()
		return nil
	},
	PreRunE: func(cmd *cobra.Command, args []string) error {
		if len(args) == 1 {
			config.Config.Tap.PodRegexStr = args[0]
		} else if len(args) > 1 {
			return errors.New("unexpected number of arguments")
		}

		if err := config.Config.Tap.Validate(); err != nil {
			return errormessage.FormatError(err)
		}

		return nil
	},
}

func init() {
	rootCmd.AddCommand(tapCmd)

	defaultTapConfig := configStructs.TapConfig{}
	if err := defaults.Set(&defaultTapConfig); err != nil {
		log.Debug().Err(err).Send()
	}

	tapCmd.Flags().StringP(configStructs.DockerRegistryLabel, "r", defaultTapConfig.Docker.Registry, "The Docker registry that's hosting the images")
	tapCmd.Flags().StringP(configStructs.DockerTagLabel, "t", defaultTapConfig.Docker.Tag, "The tag of the Docker images that are going to be pulled")
	tapCmd.Flags().String(configStructs.DockerImagePullPolicy, defaultTapConfig.Docker.ImagePullPolicy, "ImagePullPolicy for the Docker images")
	tapCmd.Flags().StringSlice(configStructs.DockerImagePullSecrets, defaultTapConfig.Docker.ImagePullSecrets, "ImagePullSecrets for the Docker images")
	tapCmd.Flags().Uint16(configStructs.ProxyFrontPortLabel, defaultTapConfig.Proxy.Front.Port, "Provide a custom port for the proxy/port-forward")
	tapCmd.Flags().String(configStructs.ProxyHostLabel, defaultTapConfig.Proxy.Host, "Provide a custom host for the proxy/port-forward")
	tapCmd.Flags().StringSliceP(configStructs.NamespacesLabel, "n", defaultTapConfig.Namespaces, "Namespaces selector")
	tapCmd.Flags().StringSliceP(configStructs.ExcludedNamespacesLabel, "e", defaultTapConfig.ExcludedNamespaces, "Excluded namespaces")
	tapCmd.Flags().StringP(configStructs.ReleaseNamespaceLabel, "s", defaultTapConfig.Release.Namespace, "Release namespace of Kubeshark")
	tapCmd.Flags().Bool(configStructs.PersistentStorageLabel, defaultTapConfig.PersistentStorage, "Enable persistent storage (PersistentVolumeClaim)")
	tapCmd.Flags().Bool(configStructs.PersistentStorageStaticLabel, defaultTapConfig.PersistentStorageStatic, "Persistent storage static provision")
	tapCmd.Flags().String(configStructs.EfsFileSytemIdAndPathLabel, defaultTapConfig.EfsFileSytemIdAndPath, "EFS file system ID")
	tapCmd.Flags().String(configStructs.StorageLimitLabel, defaultTapConfig.StorageLimit, "Override the default storage limit (per node)")
	tapCmd.Flags().String(configStructs.StorageClassLabel, defaultTapConfig.StorageClass, "Override the default storage class of the PersistentVolumeClaim (per node)")
	tapCmd.Flags().Bool(configStructs.DryRunLabel, defaultTapConfig.DryRun, "Preview of all pods matching the regex, without tapping them")
	tapCmd.Flags().Bool(configStructs.ServiceMeshLabel, defaultTapConfig.ServiceMesh, "Capture the encrypted traffic if the cluster is configured with a service mesh and with mTLS")
	tapCmd.Flags().Bool(configStructs.TlsLabel, defaultTapConfig.Tls, "Capture the traffic that's encrypted with OpenSSL or Go crypto/tls libraries")
	tapCmd.Flags().Bool(configStructs.IngressEnabledLabel, defaultTapConfig.Ingress.Enabled, "Enable Ingress")
	tapCmd.Flags().Bool(configStructs.TelemetryEnabledLabel, defaultTapConfig.Telemetry.Enabled, "Enable/disable Telemetry")
	tapCmd.Flags().Bool(configStructs.ResourceGuardEnabledLabel, defaultTapConfig.ResourceGuard.Enabled, "Enable/disable resource guard")
	tapCmd.Flags().Bool(configStructs.WatchdogEnabled, defaultTapConfig.Watchdog.Enabled, "Enable/disable watchdog")
}
07070100000017000081A4000000000000000000000001689B9CB30000362A000000000000000000000000000000000000002600000000kubeshark-cli-52.8.1/cmd/tapRunner.gopackage cmd

import (
	"context"
	"encoding/json"
	"fmt"
	"os"
	"regexp"
	"strings"
	"sync"
	"time"

	"github.com/kubeshark/kubeshark/kubernetes/helm"
	"github.com/kubeshark/kubeshark/misc"
	"github.com/kubeshark/kubeshark/utils"

	core "k8s.io/api/core/v1"

	"github.com/kubeshark/kubeshark/config"
	"github.com/kubeshark/kubeshark/config/configStructs"
	"github.com/kubeshark/kubeshark/errormessage"
	"github.com/kubeshark/kubeshark/kubernetes"
	"github.com/rs/zerolog/log"
)

const cleanupTimeout = time.Minute

type tapState struct {
	startTime        time.Time
	targetNamespaces []string
}

var state tapState

type Readiness struct {
	Hub   bool
	Front bool
	Proxy bool
	sync.Mutex
}

var ready *Readiness

func tap() {
	ready = &Readiness{}
	state.startTime = time.Now()
	log.Info().Str("registry", config.Config.Tap.Docker.Registry).Str("tag", config.Config.Tap.Docker.Tag).Msg("Using Docker:")

	log.Info().
		Str("limit", config.Config.Tap.StorageLimit).
		Msg(fmt.Sprintf("%s will store the traffic up to a limit (per node). Oldest TCP/UDP streams will be removed once the limit is reached.", misc.Software))

	kubernetesProvider, err := getKubernetesProviderForCli(false, false)
	if err != nil {
		log.Error().Err(err).Send()
		return
	}

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel() // cancel will be called when this function exits

	state.targetNamespaces = kubernetesProvider.GetNamespaces()

	log.Info().
		Bool("enabled", config.Config.Tap.Telemetry.Enabled).
		Str("notice", "Telemetry can be disabled by setting the flag: --telemetry-enabled=false").
		Msg("Telemetry")

	log.Info().Strs("namespaces", state.targetNamespaces).Msg("Targeting pods in:")

	if err := printTargetedPodsPreview(ctx, kubernetesProvider, state.targetNamespaces); err != nil {
		log.Error().Err(errormessage.FormatError(err)).Msg("Error listing pods!")
	}

	if config.Config.Tap.DryRun {
		return
	}

	log.Info().Msg(fmt.Sprintf("Waiting for the creation of %s resources...", misc.Software))

	rel, err := helm.NewHelm(
		config.Config.Tap.Release.Repo,
		config.Config.Tap.Release.Name,
		config.Config.Tap.Release.Namespace,
	).Install()
	if err != nil {
		if err.Error() != "cannot re-use a name that is still in use" {
			log.Error().Err(err).Send()
			os.Exit(1)
		}
		log.Info().Msg("Found an existing installation, skipping Helm install...")

		updateConfig(kubernetesProvider)
		postFrontStarted(ctx, kubernetesProvider, cancel)
	} else {
		log.Info().Msgf("Installed the Helm release: %s", rel.Name)

		go watchHubEvents(ctx, kubernetesProvider, cancel)
		go watchHubPod(ctx, kubernetesProvider, cancel)
		go watchFrontPod(ctx, kubernetesProvider, cancel)
	}

	defer finishTapExecution(kubernetesProvider)

	// block until exit signal or error
	utils.WaitForTermination(ctx, cancel)

	if !config.Config.Tap.Ingress.Enabled {
		printProxyCommandSuggestion()
	}
}

func printProxyCommandSuggestion() {
	log.Warn().
		Str("command", fmt.Sprintf("%s proxy", misc.Program)).
		Msg(fmt.Sprintf(utils.Yellow, "To re-establish a proxy/port-forward, run:"))
}

func finishTapExecution(kubernetesProvider *kubernetes.Provider) {
	finishSelfExecution(kubernetesProvider)
}

/*
This function is a bit problematic, as its preview might diverge from the pods that Kubeshark actually targets.
The alternative would be to wait for the Hub to become ready and then query it for the pods it listens to, but that
has the arguably worse drawback of a long delay before the user sees which pods are targeted, if any.
*/
func printTargetedPodsPreview(ctx context.Context, kubernetesProvider *kubernetes.Provider, namespaces []string) error {
	if matchingPods, err := kubernetesProvider.ListAllRunningPodsMatchingRegex(ctx, config.Config.Tap.PodRegex(), namespaces); err != nil {
		return err
	} else {
		if len(matchingPods) == 0 {
			printNoPodsFoundSuggestion(namespaces)
		}
		for _, targetedPod := range matchingPods {
			log.Info().Msg(fmt.Sprintf("Targeted pod: %s", fmt.Sprintf(utils.Green, targetedPod.Name)))
		}
		return nil
	}
}

func printNoPodsFoundSuggestion(targetNamespaces []string) {
	var suggestionStr string
	if !utils.Contains(targetNamespaces, kubernetes.K8sAllNamespaces) {
		suggestionStr = ". You can also try selecting a different namespace with -n or target all namespaces with -A"
	}
	log.Warn().Msg(fmt.Sprintf("Did not find any currently running pods that match the regex argument, %s will automatically target matching pods if any are created later%s", misc.Software, suggestionStr))
}

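// watchHubPod watches the Hub pod in the release namespace, marks it ready once it
// reaches the Running phase, and starts the proxy once both Hub and Front are ready.
// The tap is cancelled if the pod is removed, fails, or is not ready within the timeout.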
func watchHubPod(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
	podExactRegex := regexp.MustCompile(fmt.Sprintf("^%s", kubernetes.HubPodName))
	podWatchHelper := kubernetes.NewPodWatchHelper(kubernetesProvider, podExactRegex)
	eventChan, errorChan := kubernetes.FilteredWatch(ctx, podWatchHelper, []string{config.Config.Tap.Release.Namespace}, podWatchHelper)
	isPodReady := false

	timeAfter := time.After(120 * time.Second)
	for {
		select {
		case wEvent, ok := <-eventChan:
			if !ok {
				eventChan = nil
				continue
			}

			switch wEvent.Type {
			case kubernetes.EventAdded:
				log.Info().Str("pod", kubernetes.HubPodName).Msg("Added:")
			case kubernetes.EventDeleted:
				log.Info().Str("pod", kubernetes.HubPodName).Msg("Removed:")
				cancel()
				return
			case kubernetes.EventModified:
				modifiedPod, err := wEvent.ToPod()
				if err != nil {
					log.Error().Str("pod", kubernetes.HubPodName).Err(err).Msg("While watching pod.")
					cancel()
					continue
				}

				log.Debug().
					Str("pod", kubernetes.HubPodName).
					Interface("phase", modifiedPod.Status.Phase).
					Interface("containers-statuses", modifiedPod.Status.ContainerStatuses).
					Msg("Watching pod.")

				if modifiedPod.Status.Phase == core.PodRunning && !isPodReady {
					isPodReady = true

					ready.Lock()
					ready.Hub = true
					ready.Unlock()
					log.Info().Str("pod", kubernetes.HubPodName).Msg("Ready.")
				}

				ready.Lock()
				proxyDone := ready.Proxy
				hubPodReady := ready.Hub
				frontPodReady := ready.Front
				ready.Unlock()

				if !proxyDone && hubPodReady && frontPodReady {
					ready.Lock()
					ready.Proxy = true
					ready.Unlock()
					postFrontStarted(ctx, kubernetesProvider, cancel)
				}
			case kubernetes.EventBookmark:
				break
			case kubernetes.EventError:
				break
			}
		case err, ok := <-errorChan:
			if !ok {
				errorChan = nil
				continue
			}

			log.Error().
				Str("pod", kubernetes.HubPodName).
				Str("namespace", config.Config.Tap.Release.Namespace).
				Err(err).
				Msg("Failed creating pod.")
			cancel()

		case <-timeAfter:
			if !isPodReady {
				log.Error().
					Str("pod", kubernetes.HubPodName).
					Msg("Pod was not ready in time.")
				cancel()
			}
		case <-ctx.Done():
			log.Debug().
				Str("pod", kubernetes.HubPodName).
				Msg("Watching pod, context done.")
			return
		}
	}
}

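// watchFrontPod is the Front pod counterpart of watchHubPod: it marks the Front pod
// ready once it is running and starts the proxy when both Hub and Front are ready.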
func watchFrontPod(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
	podExactRegex := regexp.MustCompile(fmt.Sprintf("^%s", kubernetes.FrontPodName))
	podWatchHelper := kubernetes.NewPodWatchHelper(kubernetesProvider, podExactRegex)
	eventChan, errorChan := kubernetes.FilteredWatch(ctx, podWatchHelper, []string{config.Config.Tap.Release.Namespace}, podWatchHelper)
	isPodReady := false

	timeAfter := time.After(120 * time.Second)
	for {
		select {
		case wEvent, ok := <-eventChan:
			if !ok {
				eventChan = nil
				continue
			}

			switch wEvent.Type {
			case kubernetes.EventAdded:
				log.Info().Str("pod", kubernetes.FrontPodName).Msg("Added:")
			case kubernetes.EventDeleted:
				log.Info().Str("pod", kubernetes.FrontPodName).Msg("Removed:")
				cancel()
				return
			case kubernetes.EventModified:
				modifiedPod, err := wEvent.ToPod()
				if err != nil {
					log.Error().Str("pod", kubernetes.FrontPodName).Err(err).Msg("While watching pod.")
					cancel()
					continue
				}

				log.Debug().
					Str("pod", kubernetes.FrontPodName).
					Interface("phase", modifiedPod.Status.Phase).
					Interface("containers-statuses", modifiedPod.Status.ContainerStatuses).
					Msg("Watching pod.")

				if modifiedPod.Status.Phase == core.PodRunning && !isPodReady {
					isPodReady = true
					ready.Lock()
					ready.Front = true
					ready.Unlock()
					log.Info().Str("pod", kubernetes.FrontPodName).Msg("Ready.")
				}

				ready.Lock()
				proxyDone := ready.Proxy
				hubPodReady := ready.Hub
				frontPodReady := ready.Front
				ready.Unlock()

				if !proxyDone && hubPodReady && frontPodReady {
					ready.Lock()
					ready.Proxy = true
					ready.Unlock()
					postFrontStarted(ctx, kubernetesProvider, cancel)
				}
			case kubernetes.EventBookmark:
				break
			case kubernetes.EventError:
				break
			}
		case err, ok := <-errorChan:
			if !ok {
				errorChan = nil
				continue
			}

			log.Error().
				Str("pod", kubernetes.FrontPodName).
				Str("namespace", config.Config.Tap.Release.Namespace).
				Err(err).
				Msg("Failed creating pod.")

		case <-timeAfter:
			if !isPodReady {
				log.Error().
					Str("pod", kubernetes.FrontPodName).
					Msg("Pod was not ready in time.")
				cancel()
			}
		case <-ctx.Done():
			log.Debug().
				Str("pod", kubernetes.FrontPodName).
				Msg("Watching pod, context done.")
			return
		}
	}
}

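// watchHubEvents watches Kubernetes events regarding the Hub pod and cancels the tap on
// FailedScheduling or Failed events that occur after the tap started.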
func watchHubEvents(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
	podExactRegex := regexp.MustCompile(fmt.Sprintf("^%s", kubernetes.HubPodName))
	eventWatchHelper := kubernetes.NewEventWatchHelper(kubernetesProvider, podExactRegex, "pod")
	eventChan, errorChan := kubernetes.FilteredWatch(ctx, eventWatchHelper, []string{config.Config.Tap.Release.Namespace}, eventWatchHelper)
	for {
		select {
		case wEvent, ok := <-eventChan:
			if !ok {
				eventChan = nil
				continue
			}

			event, err := wEvent.ToEvent()
			if err != nil {
				log.Error().
					Str("pod", kubernetes.HubPodName).
					Err(err).
					Msg("Parsing resource event.")
				continue
			}

			if state.startTime.After(event.CreationTimestamp.Time) {
				continue
			}

			log.Debug().
				Str("pod", kubernetes.HubPodName).
				Str("event", event.Name).
				Time("time", event.CreationTimestamp.Time).
				Str("name", event.Regarding.Name).
				Str("kind", event.Regarding.Kind).
				Str("reason", event.Reason).
				Str("note", event.Note).
				Msg("Watching events.")

			switch event.Reason {
			case "FailedScheduling", "Failed":
				log.Error().
					Str("pod", kubernetes.HubPodName).
					Str("event", event.Name).
					Time("time", event.CreationTimestamp.Time).
					Str("name", event.Regarding.Name).
					Str("kind", event.Regarding.Kind).
					Str("reason", event.Reason).
					Str("note", event.Note).
					Msg("Watching events.")
				cancel()

			}
		case err, ok := <-errorChan:
			if !ok {
				errorChan = nil
				continue
			}

			log.Error().
				Str("pod", kubernetes.HubPodName).
				Err(err).
				Msg("While watching events.")

		case <-ctx.Done():
			log.Debug().
				Str("pod", kubernetes.HubPodName).
				Msg("Watching pod events, context done.")
			return
		}
	}
}

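// postFrontStarted establishes the proxy/port-forward to the front service, prints (and,
// unless headless, opens) the dashboard URL, waits for the Hub to become ready and then
// starts the script watcher and the scripting console when configured.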
func postFrontStarted(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
	startProxyReportErrorIfAny(
		kubernetesProvider,
		ctx,
		kubernetes.FrontServiceName,
		kubernetes.FrontPodName,
		configStructs.ProxyFrontPortLabel,
		config.Config.Tap.Proxy.Front.Port,
		configStructs.ContainerPort,
		"",
	)

	var url string
	if config.Config.Tap.Ingress.Enabled {
		url = fmt.Sprintf("http://%s", config.Config.Tap.Ingress.Host)
	} else {
		url = kubernetes.GetProxyOnPort(config.Config.Tap.Proxy.Front.Port)
	}
	log.Info().Str("url", url).Msg(fmt.Sprintf(utils.Green, fmt.Sprintf("%s is available at:", misc.Software)))

	if !config.Config.HeadlessMode {
		utils.OpenBrowser(url)
	}

	for {
		ready.Lock()
		hubReady := ready.Hub
		ready.Unlock()
		if hubReady {
			break
		}
		time.Sleep(100 * time.Millisecond)
	}

	if (config.Config.Scripting.Source != "" || len(config.Config.Scripting.Sources) > 0) && config.Config.Scripting.WatchScripts {
		watchScripts(ctx, kubernetesProvider, false)
	}

	if config.Config.Scripting.Console {
		go runConsoleWithoutProxy()
	}
}

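// updateConfig pushes the CLI configuration (license secret, pod regex, namespaces,
// scripting environment, ingress, auth and front proxy port) into the existing
// Kubeshark Secret and ConfigMap entries of a running installation.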
func updateConfig(kubernetesProvider *kubernetes.Provider) {
	_, _ = kubernetes.SetSecret(kubernetesProvider, kubernetes.SECRET_LICENSE, config.Config.License)
	_, _ = kubernetes.SetConfig(kubernetesProvider, kubernetes.CONFIG_POD_REGEX, config.Config.Tap.PodRegexStr)
	_, _ = kubernetes.SetConfig(kubernetesProvider, kubernetes.CONFIG_NAMESPACES, strings.Join(config.Config.Tap.Namespaces, ","))
	_, _ = kubernetes.SetConfig(kubernetesProvider, kubernetes.CONFIG_EXCLUDED_NAMESPACES, strings.Join(config.Config.Tap.ExcludedNamespaces, ","))

	data, err := json.Marshal(config.Config.Scripting.Env)
	if err != nil {
		log.Error().Str("config", kubernetes.CONFIG_SCRIPTING_ENV).Err(err).Send()
		return
	}
	_, _ = kubernetes.SetConfig(kubernetesProvider, kubernetes.CONFIG_SCRIPTING_ENV, string(data))

	ingressEnabled := ""
	if config.Config.Tap.Ingress.Enabled {
		ingressEnabled = "true"
	}

	authEnabled := ""
	if config.Config.Tap.Auth.Enabled {
		authEnabled = "true"
	}

	_, _ = kubernetes.SetConfig(kubernetesProvider, kubernetes.CONFIG_INGRESS_ENABLED, ingressEnabled)
	_, _ = kubernetes.SetConfig(kubernetesProvider, kubernetes.CONFIG_INGRESS_HOST, config.Config.Tap.Ingress.Host)

	_, _ = kubernetes.SetConfig(kubernetesProvider, kubernetes.CONFIG_PROXY_FRONT_PORT, fmt.Sprint(config.Config.Tap.Proxy.Front.Port))

	_, _ = kubernetes.SetConfig(kubernetesProvider, kubernetes.CONFIG_AUTH_ENABLED, authEnabled)
	_, _ = kubernetes.SetConfig(kubernetesProvider, kubernetes.CONFIG_AUTH_TYPE, config.Config.Tap.Auth.Type)
	_, _ = kubernetes.SetConfig(kubernetesProvider, kubernetes.CONFIG_AUTH_SAML_IDP_METADATA_URL, config.Config.Tap.Auth.Saml.IdpMetadataUrl)
}
07070100000018000081A4000000000000000000000001689B9CB3000002C2000000000000000000000000000000000000002400000000kubeshark-cli-52.8.1/cmd/version.gopackage cmd

import (
	"fmt"
	"strconv"
	"time"

	"github.com/kubeshark/kubeshark/config"
	"github.com/kubeshark/kubeshark/misc"
	"github.com/rs/zerolog/log"
	"github.com/spf13/cobra"
)

var versionCmd = &cobra.Command{
	Use:   "version",
	Short: "Print version info",
	RunE: func(cmd *cobra.Command, args []string) error {
		timeStampInt, _ := strconv.ParseInt(misc.BuildTimestamp, 10, 0)
		if config.DebugMode {
			log.Info().
				Str("version", misc.Ver).
				Str("branch", misc.Branch).
				Str("commit-hash", misc.GitCommitHash).
				Time("build-time", time.Unix(timeStampInt, 0)).
				Send()
		} else {
			fmt.Println(misc.Ver)
		}
		return nil
	},
}

func init() {
	rootCmd.AddCommand(versionCmd)
}
07070100000019000081A4000000000000000000000001689B9CB300000074000000000000000000000000000000000000002100000000kubeshark-cli-52.8.1/codecov.ymlcoverage:
  status:
    project:
      default:
        threshold: 1%
    patch:
      default:
        enabled: no
0707010000001A000041ED000000000000000000000002689B9CB300000000000000000000000000000000000000000000001C00000000kubeshark-cli-52.8.1/config0707010000001B000081A4000000000000000000000001689B9CB300002D05000000000000000000000000000000000000002600000000kubeshark-cli-52.8.1/config/config.gopackage config

import (
	"errors"
	"fmt"
	"io"
	"os"
	"path"
	"path/filepath"
	"reflect"
	"strconv"
	"strings"

	"github.com/creasty/defaults"
	"github.com/goccy/go-yaml"
	"github.com/kubeshark/kubeshark/misc"
	"github.com/kubeshark/kubeshark/misc/version"
	"github.com/kubeshark/kubeshark/utils"
	"github.com/rs/zerolog"
	"github.com/rs/zerolog/log"
	"github.com/spf13/cobra"
	"github.com/spf13/pflag"
)

const (
	Separator      = "="
	SetCommandName = "set"
	FieldNameTag   = "yaml"
	ReadonlyTag    = "readonly"
	DebugFlag      = "debug"
	ConfigPathFlag = "config-path"
)

var (
	Config         ConfigStruct
	DebugMode      bool
	cmdName        string
	ConfigFilePath string
)

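// InitConfig builds the effective configuration for the given command by layering the
// built-in defaults, the config file (when present) and any flags that were explicitly
// set on the command line. It also enables debug logging and, for most commands, kicks
// off the newer-version check in the background.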
func InitConfig(cmd *cobra.Command) error {
	var err error
	DebugMode, err = cmd.Flags().GetBool(DebugFlag)
	if err != nil {
		log.Error().Err(err).Msg(fmt.Sprintf("Can't receive '%s' flag", DebugFlag))
	}

	if DebugMode {
		zerolog.SetGlobalLevel(zerolog.DebugLevel)
	}

	if cmd.Use == "version" {
		return nil
	}

	if !utils.Contains([]string{
		"console",
		"pro",
		"manifests",
		"license",
	}, cmd.Use) {
		go version.CheckNewerVersion()
	}

	Config = CreateDefaultConfig()
	Config.Tap.Debug = DebugMode
	if DebugMode {
		Config.LogLevel = "debug"
	}
	cmdName = cmd.Name()
	if utils.Contains([]string{
		"clean",
		"console",
		"pro",
		"proxy",
		"scripts",
		"pprof",
	}, cmdName) {
		cmdName = "tap"
	}

	if err := defaults.Set(&Config); err != nil {
		return err
	}

	ConfigFilePath = GetConfigFilePath(cmd)
	if err := loadConfigFile(&Config, utils.Contains([]string{
		"manifests",
		"license",
	}, cmd.Use)); err != nil {
		if !os.IsNotExist(err) {
			return fmt.Errorf("invalid config, %w\n"+
				"you can regenerate the file by removing it (%v) and using `kubeshark config -r`", err, ConfigFilePath)
		}
	}

	cmd.Flags().Visit(initFlag)

	log.Debug().Interface("config", Config).Msg("Init config is finished.")

	return nil
}

func GetConfigWithDefaults() (*ConfigStruct, error) {
	defaultConf := ConfigStruct{}
	if err := defaults.Set(&defaultConf); err != nil {
		return nil, err
	}

	configElem := reflect.ValueOf(&defaultConf).Elem()
	setZeroForReadonlyFields(configElem)

	return &defaultConf, nil
}

func WriteConfig(config *ConfigStruct) error {
	template, err := utils.PrettyYaml(config)
	if err != nil {
		return fmt.Errorf("failed converting config to yaml, err: %v", err)
	}

	data := []byte(template)

	if _, err := os.Stat(ConfigFilePath); os.IsNotExist(err) {
		err = os.MkdirAll(filepath.Dir(ConfigFilePath), 0700)
		if err != nil {
			return fmt.Errorf("failed creating directories, err: %v", err)
		}
	}

	if err := os.WriteFile(ConfigFilePath, data, 0644); err != nil {
		return fmt.Errorf("failed writing config, err: %v", err)
	}

	return nil
}

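// GetConfigFilePath returns the config file location: the --config-path flag when
// provided, otherwise a <program>.yaml in the current working directory when it exists,
// otherwise the default path inside the dot folder.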
func GetConfigFilePath(cmd *cobra.Command) string {
	defaultConfigPath := path.Join(misc.GetDotFolderPath(), "config.yaml")

	cwd, err := os.Getwd()
	if err != nil {
		return defaultConfigPath
	}

	if cmd != nil {
		configPathOverride, err := cmd.Flags().GetString(ConfigPathFlag)
		if err == nil {
			if configPathOverride != "" {
				resolvedConfigPath, err := filepath.Abs(configPathOverride)
				if err != nil {
					log.Error().Err(err).Msg("--config-path flag path cannot be resolved")
				} else {
					return resolvedConfigPath
				}
			}
		} else {
			log.Error().Err(err).Msg("--config-path flag parser error")
		}
	}

	cwdConfig := filepath.Join(cwd, fmt.Sprintf("%s.yaml", misc.Program))
	reader, err := os.Open(cwdConfig)
	if err != nil {
		return defaultConfigPath
	} else {
		reader.Close()
		return cwdConfig
	}
}

func loadConfigFile(config *ConfigStruct, silent bool) error {
	reader, err := os.Open(ConfigFilePath)
	if err != nil {
		return err
	}
	defer reader.Close()

	buf, err := io.ReadAll(reader)
	if err != nil {
		return err
	}

	if err := yaml.Unmarshal(buf, config); err != nil {
		return err
	}

	if !silent {
		log.Info().Str("path", ConfigFilePath).Msg("Found config file!")
	}

	return nil
}

func initFlag(f *pflag.Flag) {
	configElemValue := reflect.ValueOf(&Config).Elem()

	var flagPath []string
	flagPath = append(flagPath, cmdName)

	flagPath = append(flagPath, strings.Split(f.Name, "-")...)

	flagPathJoined := strings.Join(flagPath, ".")
	if strings.HasSuffix(flagPathJoined, ".config.path") {
		return
	}

	sliceValue, isSliceValue := f.Value.(pflag.SliceValue)
	if !isSliceValue {
		if err := mergeFlagValue(configElemValue, flagPath, flagPathJoined, f.Value.String()); err != nil {
			log.Warn().Err(err).Send()
		}
		return
	}

	if f.Name == SetCommandName {
		if err := mergeSetFlag(configElemValue, sliceValue.GetSlice()); err != nil {
			log.Warn().Err(err).Send()
		}
		return
	}

	if err := mergeFlagValues(configElemValue, flagPath, flagPathJoined, sliceValue.GetSlice()); err != nil {
		log.Warn().Err(err).Send()
	}
}

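// mergeSetFlag applies values passed via repeated --set flags onto the config struct.
// Each value must have the form "<config path>=<value>", where the path follows the
// yaml tags of the config fields, e.g. (illustrative; the exact nested paths depend on
// the yaml tags of the nested structs):
//
//	kubeshark tap --set headless=true --set scripting.source=/path/to/scripts
//
// Multiple occurrences of the same key are merged into a slice value.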
func mergeSetFlag(configElemValue reflect.Value, setValues []string) error {
	var setErrors []string
	setMap := map[string][]string{}

	for _, setValue := range setValues {
		if !strings.Contains(setValue, Separator) {
			setErrors = append(setErrors, fmt.Sprintf("Ignoring set argument %s (set argument format: <flag name>=<flag value>)", setValue))
			continue
		}

		split := strings.SplitN(setValue, Separator, 2)
		argumentKey, argumentValue := split[0], split[1]

		setMap[argumentKey] = append(setMap[argumentKey], argumentValue)
	}

	for argumentKey, argumentValues := range setMap {
		flagPath := strings.Split(argumentKey, ".")

		if len(argumentValues) > 1 {
			if err := mergeFlagValues(configElemValue, flagPath, argumentKey, argumentValues); err != nil {
				setErrors = append(setErrors, fmt.Sprintf("%v", err))
			}
		} else {
			if err := mergeFlagValue(configElemValue, flagPath, argumentKey, argumentValues[0]); err != nil {
				setErrors = append(setErrors, fmt.Sprintf("%v", err))
			}
		}
	}

	if len(setErrors) > 0 {
		return errors.New(strings.Join(setErrors, "\n"))
	}

	return nil
}

func mergeFlagValue(configElemValue reflect.Value, flagPath []string, fullFlagName string, flagValue string) error {
	mergeFunction := func(flagName string, currentFieldStruct reflect.StructField, currentFieldElemValue reflect.Value, currentElemValue reflect.Value) error {
		currentFieldKind := currentFieldStruct.Type.Kind()

		if currentFieldKind == reflect.Slice {
			return mergeFlagValues(currentElemValue, []string{flagName}, fullFlagName, []string{flagValue})
		}

		parsedValue, err := getParsedValue(currentFieldKind, flagValue)
		if err != nil {
			return fmt.Errorf("invalid value %s for flag name %s, expected %s", flagValue, flagName, currentFieldKind)
		}

		currentFieldElemValue.Set(parsedValue)
		return nil
	}

	return mergeFlag(configElemValue, flagPath, fullFlagName, mergeFunction)
}

func mergeFlagValues(configElemValue reflect.Value, flagPath []string, fullFlagName string, flagValues []string) error {
	mergeFunction := func(flagName string, currentFieldStruct reflect.StructField, currentFieldElemValue reflect.Value, currentElemValue reflect.Value) error {
		currentFieldKind := currentFieldStruct.Type.Kind()

		if currentFieldKind != reflect.Slice {
			return fmt.Errorf("invalid values %s for flag name %s, expected %s", strings.Join(flagValues, ","), flagName, currentFieldKind)
		}

		flagValueKind := currentFieldStruct.Type.Elem().Kind()

		parsedValues := reflect.MakeSlice(reflect.SliceOf(currentFieldStruct.Type.Elem()), 0, 0)
		for _, flagValue := range flagValues {
			parsedValue, err := getParsedValue(flagValueKind, flagValue)
			if err != nil {
				return fmt.Errorf("invalid value %s for flag name %s, expected %s", flagValue, flagName, flagValueKind)
			}

			parsedValues = reflect.Append(parsedValues, parsedValue)
		}

		currentFieldElemValue.Set(parsedValues)
		return nil
	}

	return mergeFlag(configElemValue, flagPath, fullFlagName, mergeFunction)
}

func mergeFlag(currentElemValue reflect.Value, currentFlagPath []string, fullFlagName string, mergeFunction func(flagName string, currentFieldStruct reflect.StructField, currentFieldElemValue reflect.Value, currentElemValue reflect.Value) error) error {
	if len(currentFlagPath) == 0 {
		return fmt.Errorf("flag \"%s\" not found", fullFlagName)
	}

	for i := 0; i < currentElemValue.NumField(); i++ {
		currentFieldStruct := currentElemValue.Type().Field(i)
		currentFieldElemValue := currentElemValue.FieldByName(currentFieldStruct.Name)

		if currentFieldStruct.Type.Kind() == reflect.Struct && getFieldNameByTag(currentFieldStruct) == currentFlagPath[0] {
			return mergeFlag(currentFieldElemValue, currentFlagPath[1:], fullFlagName, mergeFunction)
		}

		if len(currentFlagPath) > 1 || getFieldNameByTag(currentFieldStruct) != currentFlagPath[0] {
			continue
		}

		return mergeFunction(currentFlagPath[0], currentFieldStruct, currentFieldElemValue, currentElemValue)
	}

	return fmt.Errorf("flag \"%s\" not found", fullFlagName)
}

func getFieldNameByTag(field reflect.StructField) string {
	return strings.Split(field.Tag.Get(FieldNameTag), ",")[0]
}

func getParsedValue(kind reflect.Kind, value string) (reflect.Value, error) {
	switch kind {
	case reflect.String:
		return reflect.ValueOf(value), nil
	case reflect.Bool:
		boolArgumentValue, err := strconv.ParseBool(value)
		if err != nil {
			break
		}

		return reflect.ValueOf(boolArgumentValue), nil
	case reflect.Int:
		intArgumentValue, err := strconv.ParseInt(value, 10, 64)
		if err != nil {
			break
		}

		return reflect.ValueOf(int(intArgumentValue)), nil
	case reflect.Int8:
		intArgumentValue, err := strconv.ParseInt(value, 10, 8)
		if err != nil {
			break
		}

		return reflect.ValueOf(int8(intArgumentValue)), nil
	case reflect.Int16:
		intArgumentValue, err := strconv.ParseInt(value, 10, 16)
		if err != nil {
			break
		}

		return reflect.ValueOf(int16(intArgumentValue)), nil
	case reflect.Int32:
		intArgumentValue, err := strconv.ParseInt(value, 10, 32)
		if err != nil {
			break
		}

		return reflect.ValueOf(int32(intArgumentValue)), nil
	case reflect.Int64:
		intArgumentValue, err := strconv.ParseInt(value, 10, 64)
		if err != nil {
			break
		}

		return reflect.ValueOf(intArgumentValue), nil
	case reflect.Uint:
		uintArgumentValue, err := strconv.ParseUint(value, 10, 64)
		if err != nil {
			break
		}

		return reflect.ValueOf(uint(uintArgumentValue)), nil
	case reflect.Uint8:
		uintArgumentValue, err := strconv.ParseUint(value, 10, 8)
		if err != nil {
			break
		}

		return reflect.ValueOf(uint8(uintArgumentValue)), nil
	case reflect.Uint16:
		uintArgumentValue, err := strconv.ParseUint(value, 10, 16)
		if err != nil {
			break
		}

		return reflect.ValueOf(uint16(uintArgumentValue)), nil
	case reflect.Uint32:
		uintArgumentValue, err := strconv.ParseUint(value, 10, 32)
		if err != nil {
			break
		}

		return reflect.ValueOf(uint32(uintArgumentValue)), nil
	case reflect.Uint64:
		uintArgumentValue, err := strconv.ParseUint(value, 10, 64)
		if err != nil {
			break
		}

		return reflect.ValueOf(uintArgumentValue), nil
	}

	return reflect.ValueOf(nil), errors.New("value to parse does not match type")
}

func setZeroForReadonlyFields(currentElem reflect.Value) {
	for i := 0; i < currentElem.NumField(); i++ {
		currentField := currentElem.Type().Field(i)
		currentFieldByName := currentElem.FieldByName(currentField.Name)

		if currentField.Type.Kind() == reflect.Struct {
			setZeroForReadonlyFields(currentFieldByName)
			continue
		}

		if _, ok := currentField.Tag.Lookup(ReadonlyTag); ok {
			currentFieldByName.Set(reflect.Zero(currentField.Type))
		}
	}
}
0707010000001C000081A4000000000000000000000001689B9CB300001AE5000000000000000000000000000000000000002C00000000kubeshark-cli-52.8.1/config/configStruct.gopackage config

import (
	"os"
	"path/filepath"

	"github.com/kubeshark/kubeshark/config/configStructs"
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/util/homedir"
)

const (
	KubeConfigPathConfigName = "kube-configPath"
)

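// CreateDefaultConfig returns a ConfigStruct pre-populated with the defaults that cannot
// be expressed through struct tags: node selector terms, tolerations, security
// capabilities, SAML roles, enabled dissectors, port mappings, dashboard and capture settings.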
func CreateDefaultConfig() ConfigStruct {
	return ConfigStruct{
		Tap: configStructs.TapConfig{
			NodeSelectorTerms: configStructs.NodeSelectorTermsConfig{
				Workers: []v1.NodeSelectorTerm{
					{
						MatchExpressions: []v1.NodeSelectorRequirement{
							{
								Key:      "kubernetes.io/os",
								Operator: v1.NodeSelectorOpIn,
								Values:   []string{"linux"},
							},
						},
					},
				},
				Hub: []v1.NodeSelectorTerm{
					{
						MatchExpressions: []v1.NodeSelectorRequirement{
							{
								Key:      "kubernetes.io/os",
								Operator: v1.NodeSelectorOpIn,
								Values:   []string{"linux"},
							},
						},
					},
				},
				Front: []v1.NodeSelectorTerm{
					{
						MatchExpressions: []v1.NodeSelectorRequirement{
							{
								Key:      "kubernetes.io/os",
								Operator: v1.NodeSelectorOpIn,
								Values:   []string{"linux"},
							},
						},
					},
				},
				Dex: []v1.NodeSelectorTerm{
					{
						MatchExpressions: []v1.NodeSelectorRequirement{
							{
								Key:      "kubernetes.io/os",
								Operator: v1.NodeSelectorOpIn,
								Values:   []string{"linux"},
							},
						},
					},
				},
			},
			Tolerations: configStructs.TolerationsConfig{
				Workers: []v1.Toleration{
					{
						Effect:   v1.TaintEffect("NoExecute"),
						Operator: v1.TolerationOpExists,
					},
				},
			},
			SecurityContext: configStructs.SecurityContextConfig{
				Privileged: true,
				// Capabilities used only when running in unprivileged mode
				Capabilities: configStructs.CapabilitiesConfig{
					NetworkCapture: []string{
						// NET_RAW is required to capture the network traffic
						"NET_RAW",
						// NET_ADMIN is required to capture the network traffic
						"NET_ADMIN",
					},
					ServiceMeshCapture: []string{
						// SYS_ADMIN is required to read /proc/PID/net/ns + to install eBPF programs (kernel < 5.8)
						"SYS_ADMIN",
						// SYS_PTRACE is required to setns into another process's netns + to open another process's libssl.so
						"SYS_PTRACE",
						// DAC_OVERRIDE is required to read /proc/PID/environ
						"DAC_OVERRIDE",
					},
					EBPFCapture: []string{
						// SYS_ADMIN is required to read /proc/PID/net/ns + to install eBPF programs (kernel < 5.8)
						"SYS_ADMIN",
						// SYS_PTRACE is required to setns into another process's netns + to open another process's libssl.so
						"SYS_PTRACE",
						// SYS_RESOURCE is required to change rlimits for eBPF
						"SYS_RESOURCE",
						// IPC_LOCK is required for eBPF perf buffer allocations once the buffer size exceeds a certain amount:
						// https://github.com/kubeshark/tracer/blob/13e24725ba8b98216dd0e553262e6d9c56dce5fa/main.go#L82)
						"IPC_LOCK",
					},
				},
			},
			Auth: configStructs.AuthConfig{
				Saml: configStructs.SamlConfig{
					RoleAttribute: "role",
					Roles: map[string]configStructs.Role{
						"admin": {
							Filter:          "",
							CanDownloadPCAP: true,
							CanUseScripting: true,
							ScriptingPermissions: configStructs.ScriptingPermissions{
								CanSave:     true,
								CanActivate: true,
								CanDelete:   true,
							},
							CanUpdateTargetedPods:   true,
							CanStopTrafficCapturing: true,
							ShowAdminConsoleLink:    true,
						},
					},
				},
			},
			EnabledDissectors: []string{
				"amqp",
				"dns",
				"http",
				"icmp",
				"kafka",
				"redis",
				// "sctp",
				// "syscall",
				// "tcp",
				// "udp",
				"ws",
				// "tlsx",
				"ldap",
				"radius",
				"diameter",
			},
			PortMapping: configStructs.PortMapping{
				HTTP:     []uint16{80, 443, 8080},
				AMQP:     []uint16{5671, 5672},
				KAFKA:    []uint16{9092},
				REDIS:    []uint16{6379},
				LDAP:     []uint16{389},
				DIAMETER: []uint16{3868},
			},
			Dashboard: configStructs.DashboardConfig{
				CompleteStreamingEnabled: true,
			},
			Capture: configStructs.CaptureConfig{
				Stopped:   false,
				StopAfter: "5m",
			},
		},
	}
}

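// KubeConfig holds the kubeconfig file path and context used to reach the cluster.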
type KubeConfig struct {
	ConfigPathStr string `yaml:"configPath" json:"configPath"`
	Context       string `yaml:"context" json:"context"`
}

type ManifestsConfig struct {
	Dump bool `yaml:"dump" json:"dump"`
}

type ConfigStruct struct {
	Tap                  configStructs.TapConfig       `yaml:"tap" json:"tap"`
	Logs                 configStructs.LogsConfig      `yaml:"logs" json:"logs"`
	Config               configStructs.ConfigConfig    `yaml:"config,omitempty" json:"config,omitempty"`
	PcapDump             configStructs.PcapDumpConfig  `yaml:"pcapdump" json:"pcapdump"`
	Kube                 KubeConfig                    `yaml:"kube" json:"kube"`
	DumpLogs             bool                          `yaml:"dumpLogs" json:"dumpLogs" default:"false"`
	HeadlessMode         bool                          `yaml:"headless" json:"headless" default:"false"`
	License              string                        `yaml:"license" json:"license" default:""`
	CloudLicenseEnabled  bool                          `yaml:"cloudLicenseEnabled" json:"cloudLicenseEnabled" default:"true"`
	AiAssistantEnabled   bool                          `yaml:"aiAssistantEnabled" json:"aiAssistantEnabled" default:"true"`
	DemoModeEnabled      bool                          `yaml:"demoModeEnabled" json:"demoModeEnabled" default:"false"`
	SupportChatEnabled   bool                          `yaml:"supportChatEnabled" json:"supportChatEnabled" default:"true"`
	BetaEnabled          bool                          `yaml:"betaEnabled" json:"betaEnabled" default:"false"`
	InternetConnectivity bool                          `yaml:"internetConnectivity" json:"internetConnectivity" default:"true"`
	Scripting            configStructs.ScriptingConfig `yaml:"scripting" json:"scripting"`
	Manifests            ManifestsConfig               `yaml:"manifests,omitempty" json:"manifests,omitempty"`
	Timezone             string                        `yaml:"timezone" json:"timezone"`
	LogLevel             string                        `yaml:"logLevel" json:"logLevel" default:"warning"`
}

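// ImagePullPolicy returns the configured Docker image pull policy as a Kubernetes core/v1 PullPolicy.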
func (config *ConfigStruct) ImagePullPolicy() v1.PullPolicy {
	return v1.PullPolicy(config.Tap.Docker.ImagePullPolicy)
}

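// ImagePullSecrets converts the configured image pull secret names into v1.LocalObjectReference values.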
func (config *ConfigStruct) ImagePullSecrets() []v1.LocalObjectReference {
	var ref []v1.LocalObjectReference
	for _, name := range config.Tap.Docker.ImagePullSecrets {
		ref = append(ref, v1.LocalObjectReference{Name: name})
	}

	return ref
}

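// KubeConfigPath resolves the kubeconfig location: the explicit configPath setting first,
// then the KUBECONFIG environment variable, then $HOME/.kube/config.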
func (config *ConfigStruct) KubeConfigPath() string {
	if config.Kube.ConfigPathStr != "" {
		return config.Kube.ConfigPathStr
	}

	envKubeConfigPath := os.Getenv("KUBECONFIG")
	if envKubeConfigPath != "" {
		return envKubeConfigPath
	}

	home := homedir.HomeDir()
	return filepath.Join(home, ".kube", "config")
}
0707010000001D000041ED000000000000000000000002689B9CB300000000000000000000000000000000000000000000002A00000000kubeshark-cli-52.8.1/config/configStructs0707010000001E000081A4000000000000000000000001689B9CB3000000CB000000000000000000000000000000000000003A00000000kubeshark-cli-52.8.1/config/configStructs/configConfig.gopackage configStructs

const (
	RegenerateConfigName = "regenerate"
)

type ConfigConfig struct {
	Regenerate bool `yaml:"regenerate,omitempty" json:"regenerate,omitempty" default:"false" readonly:""`
}
0707010000001F000081A4000000000000000000000001689B9CB3000002C4000000000000000000000000000000000000003800000000kubeshark-cli-52.8.1/config/configStructs/logsConfig.gopackage configStructs

import (
	"fmt"
	"os"
	"path"

	"github.com/kubeshark/kubeshark/misc"
)

const (
	FileLogsName = "file"
	GrepLogsName = "grep"
)

type LogsConfig struct {
	FileStr string `yaml:"file" json:"file"`
	Grep    string `yaml:"grep" json:"grep"`
}

func (config *LogsConfig) Validate() error {
	if config.FileStr == "" {
		_, err := os.Getwd()
		if err != nil {
			return fmt.Errorf("failed to get PWD, %v (try using `%s logs -f <full path dest zip file>)`", err, misc.Program)
		}
	}

	return nil
}

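// FilePath returns the destination for the logs archive, defaulting to <program>_logs.zip
// in the current working directory when no file is configured.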
func (config *LogsConfig) FilePath() string {
	if config.FileStr == "" {
		pwd, _ := os.Getwd()
		return path.Join(pwd, fmt.Sprintf("%s_logs.zip", misc.Program))
	}

	return config.FileStr
}
07070100000020000081A4000000000000000000000001689B9CB3000009F1000000000000000000000000000000000000003D00000000kubeshark-cli-52.8.1/config/configStructs/scriptingConfig.gopackage configStructs

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"strings"

	"github.com/kubeshark/kubeshark/misc"
	"github.com/rs/zerolog/log"
)

type ScriptingConfig struct {
	Env          map[string]interface{} `yaml:"env" json:"env" default:"{}"`
	Source       string                 `yaml:"source" json:"source" default:""`
	Sources      []string               `yaml:"sources" json:"sources" default:"[]"`
	WatchScripts bool                   `yaml:"watchScripts" json:"watchScripts" default:"true"`
	Active       []string               `yaml:"active" json:"active" default:"[]"`
	Console      bool                   `yaml:"console" json:"console" default:"true"`
}

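// GetScripts collects every .js file from the Source directory and the Sources directories
// and loads each one as a script; non-JS files are skipped.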
func (config *ScriptingConfig) GetScripts() (scripts []*misc.Script, err error) {
	// Check if both Source and Sources are empty
	if config.Source == "" && len(config.Sources) == 0 {
		return nil, nil
	}

	var allFiles []struct {
		Source string
		File   fs.DirEntry
	}

	// Handle single Source directory
	if config.Source != "" {
		files, err := os.ReadDir(config.Source)
		if err != nil {
			return nil, fmt.Errorf("failed to read directory %s: %v", config.Source, err)
		}
		for _, file := range files {
			allFiles = append(allFiles, struct {
				Source string
				File   fs.DirEntry
			}{Source: config.Source, File: file})
		}
	}

	// Handle multiple Sources directories
	if len(config.Sources) > 0 {
		for _, source := range config.Sources {
			files, err := os.ReadDir(source)
			if err != nil {
				return nil, fmt.Errorf("failed to read directory %s: %v", source, err)
			}
			for _, file := range files {
				allFiles = append(allFiles, struct {
					Source string
					File   fs.DirEntry
				}{Source: source, File: file})
			}
		}
	}

	// Iterate over all collected files
	for _, f := range allFiles {
		if f.File.IsDir() {
			continue
		}

		// Construct the full path based on the relevant source directory
		path := filepath.Join(f.Source, f.File.Name())
		if !strings.HasSuffix(f.File.Name(), ".js") { // Use file name suffix for skipping non-JS files
			log.Info().Str("path", path).Msg("Skipping non-JS file")
			continue
		}

		// Read the script file
		var script *misc.Script
		script, err = misc.ReadScriptFile(path)
		if err != nil {
			return nil, fmt.Errorf("failed to read script file %s: %v", path, err)
		}

		// Append the valid script to the scripts slice
		scripts = append(scripts, script)

		log.Debug().Str("path", path).Msg("Found script:")
	}

	// Return the collected scripts and nil error if successful
	return scripts, nil
}
07070100000021000081A4000000000000000000000001689B9CB3000045F0000000000000000000000000000000000000003700000000kubeshark-cli-52.8.1/config/configStructs/tapConfig.gopackage configStructs

import (
	"fmt"
	"regexp"

	v1 "k8s.io/api/core/v1"
	networking "k8s.io/api/networking/v1"
)

const (
	DockerRegistryLabel          = "docker-registry"
	DockerTagLabel               = "docker-tag"
	DockerImagePullPolicy        = "docker-imagePullPolicy"
	DockerImagePullSecrets       = "docker-imagePullSecrets"
	ProxyFrontPortLabel          = "proxy-front-port"
	ProxyHubPortLabel            = "proxy-hub-port"
	ProxyHostLabel               = "proxy-host"
	NamespacesLabel              = "namespaces"
	ExcludedNamespacesLabel      = "excludedNamespaces"
	ReleaseNamespaceLabel        = "release-namespace"
	PersistentStorageLabel       = "persistentStorage"
	PersistentStorageStaticLabel = "persistentStorageStatic"
	EfsFileSytemIdAndPathLabel   = "efsFileSytemIdAndPath"
	StorageLimitLabel            = "storageLimit"
	StorageClassLabel            = "storageClass"
	DryRunLabel                  = "dryRun"
	PcapLabel                    = "pcap"
	ServiceMeshLabel             = "serviceMesh"
	TlsLabel                     = "tls"
	IgnoreTaintedLabel           = "ignoreTainted"
	IngressEnabledLabel          = "ingress-enabled"
	TelemetryEnabledLabel        = "telemetry-enabled"
	ResourceGuardEnabledLabel    = "resource-guard-enabled"
	PprofPortLabel               = "pprof-port"
	PprofViewLabel               = "pprof-view"
	DebugLabel                   = "debug"
	ContainerPort                = 8080
	ContainerPortStr             = "8080"
	PcapDest                     = "dest"
	PcapMaxSize                  = "maxSize"
	PcapMaxTime                  = "maxTime"
	PcapTimeInterval             = "timeInterval"
	PcapKubeconfig               = "kubeconfig"
	PcapDumpEnabled              = "enabled"
	PcapTime                     = "time"
	WatchdogEnabled              = "watchdogEnabled"
)

type ResourceLimitsHub struct {
	CPU    string `yaml:"cpu" json:"cpu" default:"0"`
	Memory string `yaml:"memory" json:"memory" default:"5Gi"`
}

type ResourceLimitsWorker struct {
	CPU    string `yaml:"cpu" json:"cpu" default:"0"`
	Memory string `yaml:"memory" json:"memory" default:"3Gi"`
}

type ResourceRequests struct {
	CPU    string `yaml:"cpu" json:"cpu" default:"50m"`
	Memory string `yaml:"memory" json:"memory" default:"50Mi"`
}

type ResourceRequirementsHub struct {
	Limits   ResourceLimitsHub `yaml:"limits" json:"limits"`
	Requests ResourceRequests  `yaml:"requests" json:"requests"`
}

type ResourceRequirementsWorker struct {
	Limits   ResourceLimitsHub `yaml:"limits" json:"limits"`
	Requests ResourceRequests  `yaml:"requests" json:"requests"`
}

type WorkerConfig struct {
	SrvPort uint16 `yaml:"srvPort" json:"srvPort" default:"48999"`
}

type HubConfig struct {
	SrvPort uint16 `yaml:"srvPort" json:"srvPort" default:"8898"`
}

type FrontConfig struct {
	Port uint16 `yaml:"port" json:"port" default:"8899"`
}

type ProxyConfig struct {
	Worker WorkerConfig `yaml:"worker" json:"worker"`
	Hub    HubConfig    `yaml:"hub" json:"hub"`
	Front  FrontConfig  `yaml:"front" json:"front"`
	Host   string       `yaml:"host" json:"host" default:"127.0.0.1"`
}

type OverrideImageConfig struct {
	Worker string `yaml:"worker" json:"worker"`
	Hub    string `yaml:"hub" json:"hub"`
	Front  string `yaml:"front" json:"front"`
}
type OverrideTagConfig struct {
	Worker string `yaml:"worker" json:"worker"`
	Hub    string `yaml:"hub" json:"hub"`
	Front  string `yaml:"front" json:"front"`
}

type DockerConfig struct {
	Registry         string              `yaml:"registry" json:"registry" default:"docker.io/kubeshark"`
	Tag              string              `yaml:"tag" json:"tag" default:""`
	TagLocked        bool                `yaml:"tagLocked" json:"tagLocked" default:"true"`
	ImagePullPolicy  string              `yaml:"imagePullPolicy" json:"imagePullPolicy" default:"Always"`
	ImagePullSecrets []string            `yaml:"imagePullSecrets" json:"imagePullSecrets"`
	OverrideImage    OverrideImageConfig `yaml:"overrideImage" json:"overrideImage"`
	OverrideTag      OverrideTagConfig   `yaml:"overrideTag" json:"overrideTag"`
}

type DnsConfig struct {
	Nameservers []string          `yaml:"nameservers" json:"nameservers" default:"[]"`
	Searches    []string          `yaml:"searches" json:"searches" default:"[]"`
	Options     []DnsConfigOption `yaml:"options" json:"options" default:"[]"`
}

type DnsConfigOption struct {
	Name  string `yaml:"name" json:"name"`
	Value string `yaml:"value" json:"value"`
}

type ResourcesConfig struct {
	Hub     ResourceRequirementsHub    `yaml:"hub" json:"hub"`
	Sniffer ResourceRequirementsWorker `yaml:"sniffer" json:"sniffer"`
	Tracer  ResourceRequirementsWorker `yaml:"tracer" json:"tracer"`
}

type ProbesConfig struct {
	Hub     ProbeConfig `yaml:"hub" json:"hub"`
	Sniffer ProbeConfig `yaml:"sniffer" json:"sniffer"`
}

type NodeSelectorTermsConfig struct {
	Hub     []v1.NodeSelectorTerm `yaml:"hub" json:"hub" default:"[]"`
	Workers []v1.NodeSelectorTerm `yaml:"workers" json:"workers" default:"[]"`
	Front   []v1.NodeSelectorTerm `yaml:"front" json:"front" default:"[]"`
	Dex     []v1.NodeSelectorTerm `yaml:"dex" json:"dex" default:"[]"`
}

type TolerationsConfig struct {
	Hub     []v1.Toleration `yaml:"hub" json:"hub" default:"[]"`
	Workers []v1.Toleration `yaml:"workers" json:"workers" default:"[]"`
	Front   []v1.Toleration `yaml:"front" json:"front" default:"[]"`
}

type ProbeConfig struct {
	InitialDelaySeconds int `yaml:"initialDelaySeconds" json:"initialDelaySeconds" default:"5"`
	PeriodSeconds       int `yaml:"periodSeconds" json:"periodSeconds" default:"5"`
	SuccessThreshold    int `yaml:"successThreshold" json:"successThreshold" default:"1"`
	FailureThreshold    int `yaml:"failureThreshold" json:"failureThreshold" default:"3"`
}

type ScriptingPermissions struct {
	CanSave     bool `yaml:"canSave" json:"canSave" default:"true"`
	CanActivate bool `yaml:"canActivate" json:"canActivate" default:"true"`
	CanDelete   bool `yaml:"canDelete" json:"canDelete" default:"true"`
}

type Role struct {
	Filter                  string               `yaml:"filter" json:"filter" default:""`
	CanDownloadPCAP         bool                 `yaml:"canDownloadPCAP" json:"canDownloadPCAP" default:"false"`
	CanUseScripting         bool                 `yaml:"canUseScripting" json:"canUseScripting" default:"false"`
	ScriptingPermissions    ScriptingPermissions `yaml:"scriptingPermissions" json:"scriptingPermissions"`
	CanUpdateTargetedPods   bool                 `yaml:"canUpdateTargetedPods" json:"canUpdateTargetedPods" default:"false"`
	CanStopTrafficCapturing bool                 `yaml:"canStopTrafficCapturing" json:"canStopTrafficCapturing" default:"false"`
	ShowAdminConsoleLink    bool                 `yaml:"showAdminConsoleLink" json:"showAdminConsoleLink" default:"false"`
}

type SamlConfig struct {
	IdpMetadataUrl string          `yaml:"idpMetadataUrl" json:"idpMetadataUrl"`
	X509crt        string          `yaml:"x509crt" json:"x509crt"`
	X509key        string          `yaml:"x509key" json:"x509key"`
	RoleAttribute  string          `yaml:"roleAttribute" json:"roleAttribute"`
	Roles          map[string]Role `yaml:"roles" json:"roles"`
}

type AuthConfig struct {
	Enabled bool       `yaml:"enabled" json:"enabled" default:"false"`
	Type    string     `yaml:"type" json:"type" default:"saml"`
	Saml    SamlConfig `yaml:"saml" json:"saml"`
}

type IngressConfig struct {
	Enabled     bool                    `yaml:"enabled" json:"enabled" default:"false"`
	ClassName   string                  `yaml:"className" json:"className" default:""`
	Host        string                  `yaml:"host" json:"host" default:"ks.svc.cluster.local"`
	TLS         []networking.IngressTLS `yaml:"tls" json:"tls" default:"[]"`
	Annotations map[string]string       `yaml:"annotations" json:"annotations" default:"{}"`
}

type RoutingConfig struct {
	Front FrontRoutingConfig `yaml:"front" json:"front"`
}

type DashboardConfig struct {
	CompleteStreamingEnabled bool `yaml:"completeStreamingEnabled" json:"completeStreamingEnabled" default:"true"`
}

type FrontRoutingConfig struct {
	BasePath string `yaml:"basePath" json:"basePath" default:""`
}

type ReleaseConfig struct {
	Repo      string `yaml:"repo" json:"repo" default:"https://helm.kubeshark.co"`
	Name      string `yaml:"name" json:"name" default:"kubeshark"`
	Namespace string `yaml:"namespace" json:"namespace" default:"default"`
}

type TelemetryConfig struct {
	Enabled bool `yaml:"enabled" json:"enabled" default:"true"`
}

type ResourceGuardConfig struct {
	Enabled bool `yaml:"enabled" json:"enabled" default:"false"`
}

type SentryConfig struct {
	Enabled     bool   `yaml:"enabled" json:"enabled" default:"false"`
	Environment string `yaml:"environment" json:"environment" default:"production"`
}

type WatchdogConfig struct {
	Enabled bool `yaml:"enabled" json:"enabled" default:"false"`
}

type GitopsConfig struct {
	Enabled bool `yaml:"enabled" json:"enabled" default:"false"`
}

type CapabilitiesConfig struct {
	NetworkCapture     []string `yaml:"networkCapture" json:"networkCapture"  default:"[]"`
	ServiceMeshCapture []string `yaml:"serviceMeshCapture" json:"serviceMeshCapture"  default:"[]"`
	EBPFCapture        []string `yaml:"ebpfCapture" json:"ebpfCapture"  default:"[]"`
}

type MetricsConfig struct {
	Port uint16 `yaml:"port" json:"port" default:"49100"`
}

type PprofConfig struct {
	Enabled bool   `yaml:"enabled" json:"enabled" default:"false"`
	Port    uint16 `yaml:"port" json:"port" default:"8000"`
	View    string `yaml:"view" json:"view" default:"flamegraph"`
}

type MiscConfig struct {
	JsonTTL                     string `yaml:"jsonTTL" json:"jsonTTL" default:"5m"`
	PcapTTL                     string `yaml:"pcapTTL" json:"pcapTTL" default:"10s"`
	PcapErrorTTL                string `yaml:"pcapErrorTTL" json:"pcapErrorTTL" default:"60s"`
	TrafficSampleRate           int    `yaml:"trafficSampleRate" json:"trafficSampleRate" default:"100"`
	TcpStreamChannelTimeoutMs   int    `yaml:"tcpStreamChannelTimeoutMs" json:"tcpStreamChannelTimeoutMs" default:"10000"`
	TcpStreamChannelTimeoutShow bool   `yaml:"tcpStreamChannelTimeoutShow" json:"tcpStreamChannelTimeoutShow" default:"false"`
	ResolutionStrategy          string `yaml:"resolutionStrategy" json:"resolutionStrategy" default:"auto"`
	DuplicateTimeframe          string `yaml:"duplicateTimeframe" json:"duplicateTimeframe" default:"200ms"`
	DetectDuplicates            bool   `yaml:"detectDuplicates" json:"detectDuplicates" default:"false"`
	StaleTimeoutSeconds         int    `yaml:"staleTimeoutSeconds" json:"staleTimeoutSeconds" default:"30"`
}

type PcapDumpConfig struct {
	PcapDumpEnabled  bool   `yaml:"enabled" json:"enabled" default:"true"`
	PcapTimeInterval string `yaml:"timeInterval" json:"timeInterval" default:"1m"`
	PcapMaxTime      string `yaml:"maxTime" json:"maxTime" default:"1h"`
	PcapMaxSize      string `yaml:"maxSize" json:"maxSize" default:"500MB"`
	PcapTime         string `yaml:"time" json:"time" default:"time"`
	PcapDebug        bool   `yaml:"debug" json:"debug" default:"false"`
	PcapDest         string `yaml:"dest" json:"dest" default:""`
}

type PortMapping struct {
	HTTP     []uint16 `yaml:"http" json:"http"`
	AMQP     []uint16 `yaml:"amqp" json:"amqp"`
	KAFKA    []uint16 `yaml:"kafka" json:"kafka"`
	REDIS    []uint16 `yaml:"redis" json:"redis"`
	LDAP     []uint16 `yaml:"ldap" json:"ldap"`
	DIAMETER []uint16 `yaml:"diameter" json:"diameter"`
}

type SecurityContextConfig struct {
	Privileged      bool                  `yaml:"privileged" json:"privileged" default:"true"`
	AppArmorProfile AppArmorProfileConfig `yaml:"appArmorProfile" json:"appArmorProfile"`
	SeLinuxOptions  SeLinuxOptionsConfig  `yaml:"seLinuxOptions" json:"seLinuxOptions"`
	Capabilities    CapabilitiesConfig    `yaml:"capabilities" json:"capabilities"`
}

type AppArmorProfileConfig struct {
	Type             string `yaml:"type" json:"type"`
	LocalhostProfile string `yaml:"localhostProfile" json:"localhostProfile"`
}

type SeLinuxOptionsConfig struct {
	Level string `yaml:"level" json:"level"`
	Role  string `yaml:"role" json:"role"`
	Type  string `yaml:"type" json:"type"`
	User  string `yaml:"user" json:"user"`
}

type CaptureConfig struct {
	Stopped   bool   `yaml:"stopped" json:"stopped" default:"false"`
	StopAfter string `yaml:"stopAfter" json:"stopAfter" default:"5m"`
}

type TapConfig struct {
	Docker                         DockerConfig            `yaml:"docker" json:"docker"`
	Proxy                          ProxyConfig             `yaml:"proxy" json:"proxy"`
	PodRegexStr                    string                  `yaml:"regex" json:"regex" default:".*"`
	Namespaces                     []string                `yaml:"namespaces" json:"namespaces" default:"[]"`
	ExcludedNamespaces             []string                `yaml:"excludedNamespaces" json:"excludedNamespaces" default:"[]"`
	BpfOverride                    string                  `yaml:"bpfOverride" json:"bpfOverride" default:""`
	Capture                        CaptureConfig           `yaml:"capture" json:"capture"`
	Release                        ReleaseConfig           `yaml:"release" json:"release"`
	PersistentStorage              bool                    `yaml:"persistentStorage" json:"persistentStorage" default:"false"`
	PersistentStorageStatic        bool                    `yaml:"persistentStorageStatic" json:"persistentStorageStatic" default:"false"`
	PersistentStoragePvcVolumeMode string                  `yaml:"persistentStoragePvcVolumeMode" json:"persistentStoragePvcVolumeMode" default:"FileSystem"`
	EfsFileSytemIdAndPath          string                  `yaml:"efsFileSytemIdAndPath" json:"efsFileSytemIdAndPath" default:""`
	Secrets                        []string                `yaml:"secrets" json:"secrets" default:"[]"`
	StorageLimit                   string                  `yaml:"storageLimit" json:"storageLimit" default:"5Gi"`
	StorageClass                   string                  `yaml:"storageClass" json:"storageClass" default:"standard"`
	DryRun                         bool                    `yaml:"dryRun" json:"dryRun" default:"false"`
	DnsConfig                      DnsConfig               `yaml:"dns" json:"dns"`
	Resources                      ResourcesConfig         `yaml:"resources" json:"resources"`
	Probes                         ProbesConfig            `yaml:"probes" json:"probes"`
	ServiceMesh                    bool                    `yaml:"serviceMesh" json:"serviceMesh" default:"true"`
	Tls                            bool                    `yaml:"tls" json:"tls" default:"true"`
	DisableTlsLog                  bool                    `yaml:"disableTlsLog" json:"disableTlsLog" default:"true"`
	PacketCapture                  string                  `yaml:"packetCapture" json:"packetCapture" default:"best"`
	Labels                         map[string]string       `yaml:"labels" json:"labels" default:"{}"`
	Annotations                    map[string]string       `yaml:"annotations" json:"annotations" default:"{}"`
	NodeSelectorTerms              NodeSelectorTermsConfig `yaml:"nodeSelectorTerms" json:"nodeSelectorTerms" default:"{}"`
	Tolerations                    TolerationsConfig       `yaml:"tolerations" json:"tolerations" default:"{}"`
	Auth                           AuthConfig              `yaml:"auth" json:"auth"`
	Ingress                        IngressConfig           `yaml:"ingress" json:"ingress"`
	PriorityClass                  string                  `yaml:"priorityClass" json:"priorityClass" default:""`
	Routing                        RoutingConfig           `yaml:"routing" json:"routing"`
	IPv6                           bool                    `yaml:"ipv6" json:"ipv6" default:"true"`
	Debug                          bool                    `yaml:"debug" json:"debug" default:"false"`
	Dashboard                      DashboardConfig         `yaml:"dashboard" json:"dashboard"`
	Telemetry                      TelemetryConfig         `yaml:"telemetry" json:"telemetry"`
	ResourceGuard                  ResourceGuardConfig     `yaml:"resourceGuard" json:"resourceGuard"`
	Watchdog                       WatchdogConfig          `yaml:"watchdog" json:"watchdog"`
	Gitops                         GitopsConfig            `yaml:"gitops" json:"gitops"`
	Sentry                         SentryConfig            `yaml:"sentry" json:"sentry"`
	DefaultFilter                  string                  `yaml:"defaultFilter" json:"defaultFilter" default:"!dns and !error"`
	LiveConfigMapChangesDisabled   bool                    `yaml:"liveConfigMapChangesDisabled" json:"liveConfigMapChangesDisabled" default:"false"`
	GlobalFilter                   string                  `yaml:"globalFilter" json:"globalFilter" default:""`
	EnabledDissectors              []string                `yaml:"enabledDissectors" json:"enabledDissectors"`
	PortMapping                    PortMapping             `yaml:"portMapping" json:"portMapping"`
	CustomMacros                   map[string]string       `yaml:"customMacros" json:"customMacros" default:"{\"https\":\"tls and (http or http2)\"}"`
	Metrics                        MetricsConfig           `yaml:"metrics" json:"metrics"`
	Pprof                          PprofConfig             `yaml:"pprof" json:"pprof"`
	Misc                           MiscConfig              `yaml:"misc" json:"misc"`
	SecurityContext                SecurityContextConfig   `yaml:"securityContext" json:"securityContext"`
	MountBpf                       bool                    `yaml:"mountBpf" json:"mountBpf" default:"true"`
}

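// PodRegex returns the compiled pod-targeting regex; the compile error is discarded here
// because Validate reports it separately.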
func (config *TapConfig) PodRegex() *regexp.Regexp {
	podRegex, _ := regexp.Compile(config.PodRegexStr)
	return podRegex
}

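// Validate ensures that the pod-targeting regex compiles.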
func (config *TapConfig) Validate() error {
	_, compileErr := regexp.Compile(config.PodRegexStr)
	if compileErr != nil {
		return fmt.Errorf("%s is not a valid regex %s", config.PodRegexStr, compileErr)
	}

	return nil
}
07070100000022000081A4000000000000000000000001689B9CB3000040B1000000000000000000000000000000000000003400000000kubeshark-cli-52.8.1/config/config_internal_test.gopackage config

import (
	"fmt"
	"reflect"
	"testing"
)

type ConfigMock struct {
	SectionMock      SectionMock `yaml:"section"`
	Test             string      `yaml:"test"`
	StringField      string      `yaml:"string-field"`
	IntField         int         `yaml:"int-field"`
	BoolField        bool        `yaml:"bool-field"`
	UintField        uint        `yaml:"uint-field"`
	StringSliceField []string    `yaml:"string-slice-field"`
	IntSliceField    []int       `yaml:"int-slice-field"`
	BoolSliceField   []bool      `yaml:"bool-slice-field"`
	UintSliceField   []uint      `yaml:"uint-slice-field"`
}

type SectionMock struct {
	Test string `yaml:"test"`
}

type FieldSetValues struct {
	SetValues  []string
	FieldName  string
	FieldValue interface{}
}

func TestMergeSetFlagNoSeparator(t *testing.T) {
	tests := []struct {
		Name      string
		SetValues []string
	}{
		{Name: "empty value", SetValues: []string{""}},
		{Name: "single char", SetValues: []string{"t"}},
		{Name: "combine empty value and single char", SetValues: []string{"", "t"}},
		{Name: "two values without separator", SetValues: []string{"test", "test:true"}},
		{Name: "four values without separator", SetValues: []string{"test", "test:true", "testing!", "true"}},
	}

	for _, test := range tests {
		t.Run(test.Name, func(t *testing.T) {
			configMock := ConfigMock{}
			configMockElemValue := reflect.ValueOf(&configMock).Elem()

			err := mergeSetFlag(configMockElemValue, test.SetValues)

			if err == nil {
				t.Errorf("unexpected unhandled error - SetValues: %v", test.SetValues)
				return
			}

			for i := 0; i < configMockElemValue.NumField(); i++ {
				currentField := configMockElemValue.Type().Field(i)
				currentFieldByName := configMockElemValue.FieldByName(currentField.Name)

				if !currentFieldByName.IsZero() {
					t.Errorf("unexpected value with not default value - SetValues: %v", test.SetValues)
				}
			}
		})
	}
}

func TestMergeSetFlagInvalidFlagName(t *testing.T) {
	tests := []struct {
		Name      string
		SetValues []string
	}{
		{Name: "invalid flag name", SetValues: []string{"invalid_flag=true"}},
		{Name: "invalid flag name inside section struct", SetValues: []string{"section.invalid_flag=test"}},
		{Name: "flag name is a struct", SetValues: []string{"section=test"}},
		{Name: "empty flag name", SetValues: []string{"=true"}},
		{Name: "four tests combined", SetValues: []string{"invalid_flag=true", "config.invalid_flag=test", "section=test", "=true"}},
	}

	for _, test := range tests {
		t.Run(test.Name, func(t *testing.T) {
			configMock := ConfigMock{}
			configMockElemValue := reflect.ValueOf(&configMock).Elem()

			err := mergeSetFlag(configMockElemValue, test.SetValues)

			if err == nil {
				t.Errorf("unexpected unhandled error - SetValues: %v", test.SetValues)
				return
			}

			for i := 0; i < configMockElemValue.NumField(); i++ {
				currentField := configMockElemValue.Type().Field(i)
				currentFieldByName := configMockElemValue.FieldByName(currentField.Name)

				if !currentFieldByName.IsZero() {
					t.Errorf("unexpected case - SetValues: %v", test.SetValues)
				}
			}
		})
	}
}

func TestMergeSetFlagInvalidFlagValue(t *testing.T) {
	tests := []struct {
		Name      string
		SetValues []string
	}{
		{Name: "bool value to int field", SetValues: []string{"int-field=true"}},
		{Name: "int value to bool field", SetValues: []string{"bool-field:5"}},
		{Name: "int value to uint field", SetValues: []string{"uint-field=-1"}},
		{Name: "bool value to int slice field", SetValues: []string{"int-slice-field=true"}},
		{Name: "int value to bool slice field", SetValues: []string{"bool-slice-field=5"}},
		{Name: "int value to uint slice field", SetValues: []string{"uint-slice-field=-1"}},
		{Name: "int slice value to int field", SetValues: []string{"int-field=6", "int-field=66"}},
	}

	for _, test := range tests {
		t.Run(test.Name, func(t *testing.T) {
			configMock := ConfigMock{}
			configMockElemValue := reflect.ValueOf(&configMock).Elem()

			err := mergeSetFlag(configMockElemValue, test.SetValues)

			if err == nil {
				t.Errorf("unexpected unhandled error - SetValues: %v", test.SetValues)
				return
			}

			for i := 0; i < configMockElemValue.NumField(); i++ {
				currentField := configMockElemValue.Type().Field(i)
				currentFieldByName := configMockElemValue.FieldByName(currentField.Name)

				if !currentFieldByName.IsZero() {
					t.Errorf("unexpected case - SetValues: %v", test.SetValues)
				}
			}
		})
	}
}

func TestMergeSetFlagNotSliceValues(t *testing.T) {
	tests := []struct {
		Name            string
		FieldsSetValues []FieldSetValues
	}{
		{Name: "string field", FieldsSetValues: []FieldSetValues{{SetValues: []string{"string-field=test"}, FieldName: "StringField", FieldValue: "test"}}},
		{Name: "int field", FieldsSetValues: []FieldSetValues{{SetValues: []string{"int-field=6"}, FieldName: "IntField", FieldValue: 6}}},
		{Name: "bool field", FieldsSetValues: []FieldSetValues{{SetValues: []string{"bool-field=true"}, FieldName: "BoolField", FieldValue: true}}},
		{Name: "uint field", FieldsSetValues: []FieldSetValues{{SetValues: []string{"uint-field=6"}, FieldName: "UintField", FieldValue: uint(6)}}},
		{Name: "four fields combined", FieldsSetValues: []FieldSetValues {
			{SetValues: []string{"string-field=test"}, FieldName: "StringField", FieldValue: "test"},
			{SetValues: []string{"int-field=6"}, FieldName: "IntField", FieldValue: 6},
			{SetValues: []string{"bool-field=true"}, FieldName: "BoolField", FieldValue: true},
			{SetValues: []string{"uint-field=6"}, FieldName: "UintField", FieldValue: uint(6)},
		}},
	}

	for _, test := range tests {
		t.Run(test.Name, func(t *testing.T) {
			configMock := ConfigMock{}
			configMockElemValue := reflect.ValueOf(&configMock).Elem()

			var setValues []string
			for _, fieldSetValues := range test.FieldsSetValues {
				setValues = append(setValues, fieldSetValues.SetValues...)
			}

			err := mergeSetFlag(configMockElemValue, setValues)

			if err != nil {
				t.Errorf("unexpected error result - err: %v", err)
				return
			}

			for _, fieldSetValues := range test.FieldsSetValues {
				fieldValue := configMockElemValue.FieldByName(fieldSetValues.FieldName).Interface()
				if fieldValue != fieldSetValues.FieldValue {
					t.Errorf("unexpected result - expected: %v, actual: %v", fieldSetValues.FieldValue, fieldValue)
				}
			}
		})
	}
}

func TestMergeSetFlagSliceValues(t *testing.T) {
	tests := []struct {
		Name            string
		FieldsSetValues []FieldSetValues
	}{
		{Name: "string slice field single value", FieldsSetValues: []FieldSetValues{{SetValues: []string{"string-slice-field=test"}, FieldName: "StringSliceField", FieldValue: []string{"test"}}}},
		{Name: "int slice field single value", FieldsSetValues: []FieldSetValues{{SetValues: []string{"int-slice-field=6"}, FieldName: "IntSliceField", FieldValue: []int{6}}}},
		{Name: "bool slice field single value", FieldsSetValues: []FieldSetValues{{SetValues: []string{"bool-slice-field=true"}, FieldName: "BoolSliceField", FieldValue: []bool{true}}}},
		{Name: "uint slice field single value", FieldsSetValues: []FieldSetValues{{SetValues: []string{"uint-slice-field=6"}, FieldName: "UintSliceField", FieldValue: []uint{uint(6)}}}},
		{Name: "four single value fields combined", FieldsSetValues: []FieldSetValues{
			{SetValues: []string{"string-slice-field=test"}, FieldName: "StringSliceField", FieldValue: []string{"test"}},
			{SetValues: []string{"int-slice-field=6"}, FieldName: "IntSliceField", FieldValue: []int{6}},
			{SetValues: []string{"bool-slice-field=true"}, FieldName: "BoolSliceField", FieldValue: []bool{true}},
			{SetValues: []string{"uint-slice-field=6"}, FieldName: "UintSliceField", FieldValue: []uint{uint(6)}},
		}},
		{Name: "string slice field two values", FieldsSetValues: []FieldSetValues{{SetValues: []string{"string-slice-field=test", "string-slice-field=test2"}, FieldName: "StringSliceField", FieldValue: []string{"test", "test2"}}}},
		{Name: "int slice field two values", FieldsSetValues: []FieldSetValues{{SetValues: []string{"int-slice-field=6", "int-slice-field=66"}, FieldName: "IntSliceField", FieldValue: []int{6, 66}}}},
		{Name: "bool slice field two values", FieldsSetValues: []FieldSetValues{{SetValues: []string{"bool-slice-field=true", "bool-slice-field=false"}, FieldName: "BoolSliceField", FieldValue: []bool{true, false}}}},
		{Name: "uint slice field two values", FieldsSetValues: []FieldSetValues{{SetValues: []string{"uint-slice-field=6", "uint-slice-field=66"}, FieldName: "UintSliceField", FieldValue: []uint{uint(6), uint(66)}}}},
		{Name: "four two values fields combined", FieldsSetValues: []FieldSetValues{
			{SetValues: []string{"string-slice-field=test", "string-slice-field=test2"}, FieldName: "StringSliceField", FieldValue: []string{"test", "test2"}},
			{SetValues: []string{"int-slice-field=6", "int-slice-field=66"}, FieldName: "IntSliceField", FieldValue: []int{6, 66}},
			{SetValues: []string{"bool-slice-field=true", "bool-slice-field=false"}, FieldName: "BoolSliceField", FieldValue: []bool{true, false}},
			{SetValues: []string{"uint-slice-field=6", "uint-slice-field=66"}, FieldName: "UintSliceField", FieldValue: []uint{uint(6), uint(66)}},
		}},
	}

	for _, test := range tests {
		t.Run(test.Name, func(t *testing.T) {
			configMock := ConfigMock{}
			configMockElemValue := reflect.ValueOf(&configMock).Elem()

			var setValues []string
			for _, fieldSetValues := range test.FieldsSetValues {
				setValues = append(setValues, fieldSetValues.SetValues...)
			}

			err := mergeSetFlag(configMockElemValue, setValues)

			if err != nil {
				t.Errorf("unexpected error result - err: %v", err)
				return
			}

			for _, fieldSetValues := range test.FieldsSetValues {
				fieldValue := configMockElemValue.FieldByName(fieldSetValues.FieldName).Interface()
				if !reflect.DeepEqual(fieldValue, fieldSetValues.FieldValue) {
					t.Errorf("unexpected result - expected: %v, actual: %v", fieldSetValues.FieldValue, fieldValue)
				}
			}
		})
	}
}

func TestMergeSetFlagMixValues(t *testing.T) {
	tests := []struct {
		Name            string
		FieldsSetValues []FieldSetValues
	}{
		{Name: "single value all fields", FieldsSetValues: []FieldSetValues{
			{SetValues: []string{"string-slice-field=test"}, FieldName: "StringSliceField", FieldValue: []string{"test"}},
			{SetValues: []string{"int-slice-field=6"}, FieldName: "IntSliceField", FieldValue: []int{6}},
			{SetValues: []string{"bool-slice-field=true"}, FieldName: "BoolSliceField", FieldValue: []bool{true}},
			{SetValues: []string{"uint-slice-field=6"}, FieldName: "UintSliceField", FieldValue: []uint{uint(6)}},
			{SetValues: []string{"string-field=test"}, FieldName: "StringField", FieldValue: "test"},
			{SetValues: []string{"int-field=6"}, FieldName: "IntField", FieldValue: 6},
			{SetValues: []string{"bool-field=true"}, FieldName: "BoolField", FieldValue: true},
			{SetValues: []string{"uint-field=6"}, FieldName: "UintField", FieldValue: uint(6)},
		}},
		{Name: "two values slice fields and single value fields", FieldsSetValues: []FieldSetValues{
			{SetValues: []string{"string-slice-field=test", "string-slice-field=test2"}, FieldName: "StringSliceField", FieldValue: []string{"test", "test2"}},
			{SetValues: []string{"int-slice-field=6", "int-slice-field=66"}, FieldName: "IntSliceField", FieldValue: []int{6, 66}},
			{SetValues: []string{"bool-slice-field=true", "bool-slice-field=false"}, FieldName: "BoolSliceField", FieldValue: []bool{true, false}},
			{SetValues: []string{"uint-slice-field=6", "uint-slice-field=66"}, FieldName: "UintSliceField", FieldValue: []uint{uint(6), uint(66)}},
			{SetValues: []string{"string-field=test"}, FieldName: "StringField", FieldValue: "test"},
			{SetValues: []string{"int-field=6"}, FieldName: "IntField", FieldValue: 6},
			{SetValues: []string{"bool-field=true"}, FieldName: "BoolField", FieldValue: true},
			{SetValues: []string{"uint-field=6"}, FieldName: "UintField", FieldValue: uint(6)},
		}},
	}

	for _, test := range tests {
		t.Run(test.Name, func(t *testing.T) {
			configMock := ConfigMock{}
			configMockElemValue := reflect.ValueOf(&configMock).Elem()

			var setValues []string
			for _, fieldSetValues := range test.FieldsSetValues {
				setValues = append(setValues, fieldSetValues.SetValues...)
			}

			err := mergeSetFlag(configMockElemValue, setValues)

			if err != nil {
				t.Errorf("unexpected error result - err: %v", err)
				return
			}

			for _, fieldSetValues := range test.FieldsSetValues {
				fieldValue := configMockElemValue.FieldByName(fieldSetValues.FieldName).Interface()
				if !reflect.DeepEqual(fieldValue, fieldSetValues.FieldValue) {
					t.Errorf("unexpected result - expected: %v, actual: %v", fieldSetValues.FieldValue, fieldValue)
				}
			}
		})
	}
}

func TestGetParsedValueValidValue(t *testing.T) {
	tests := []struct {
		StringValue string
		Kind        reflect.Kind
		ActualValue interface{}
	}{
		{StringValue: "test", Kind: reflect.String, ActualValue: "test"},
		{StringValue: "123", Kind: reflect.String, ActualValue: "123"},
		{StringValue: "true", Kind: reflect.Bool, ActualValue: true},
		{StringValue: "false", Kind: reflect.Bool, ActualValue: false},
		{StringValue: "6", Kind: reflect.Int, ActualValue: 6},
		{StringValue: "-6", Kind: reflect.Int, ActualValue: -6},
		{StringValue: "6", Kind: reflect.Int8, ActualValue: int8(6)},
		{StringValue: "-6", Kind: reflect.Int8, ActualValue: int8(-6)},
		{StringValue: "6", Kind: reflect.Int16, ActualValue: int16(6)},
		{StringValue: "-6", Kind: reflect.Int16, ActualValue: int16(-6)},
		{StringValue: "6", Kind: reflect.Int32, ActualValue: int32(6)},
		{StringValue: "-6", Kind: reflect.Int32, ActualValue: int32(-6)},
		{StringValue: "6", Kind: reflect.Int64, ActualValue: int64(6)},
		{StringValue: "-6", Kind: reflect.Int64, ActualValue: int64(-6)},
		{StringValue: "6", Kind: reflect.Uint, ActualValue: uint(6)},
		{StringValue: "66", Kind: reflect.Uint, ActualValue: uint(66)},
		{StringValue: "6", Kind: reflect.Uint8, ActualValue: uint8(6)},
		{StringValue: "66", Kind: reflect.Uint8, ActualValue: uint8(66)},
		{StringValue: "6", Kind: reflect.Uint16, ActualValue: uint16(6)},
		{StringValue: "66", Kind: reflect.Uint16, ActualValue: uint16(66)},
		{StringValue: "6", Kind: reflect.Uint32, ActualValue: uint32(6)},
		{StringValue: "66", Kind: reflect.Uint32, ActualValue: uint32(66)},
		{StringValue: "6", Kind: reflect.Uint64, ActualValue: uint64(6)},
		{StringValue: "66", Kind: reflect.Uint64, ActualValue: uint64(66)},
	}

	for _, test := range tests {
		t.Run(fmt.Sprintf("%v %v", test.Kind, test.StringValue), func(t *testing.T) {
			parsedValue, err := getParsedValue(test.Kind, test.StringValue)

			if err != nil {
				t.Errorf("unexpected error result - err: %v", err)
				return
			}

			if parsedValue.Interface() != test.ActualValue {
				t.Errorf("unexpected result - expected: %v, actual: %v", test.ActualValue, parsedValue)
			}
		})
	}
}

func TestGetParsedValueInvalidValue(t *testing.T) {
	tests := []struct {
		StringValue string
		Kind        reflect.Kind
	}{
		{StringValue: "test", Kind: reflect.Bool},
		{StringValue: "123", Kind: reflect.Bool},
		{StringValue: "test", Kind: reflect.Int},
		{StringValue: "true", Kind: reflect.Int},
		{StringValue: "test", Kind: reflect.Int8},
		{StringValue: "true", Kind: reflect.Int8},
		{StringValue: "test", Kind: reflect.Int16},
		{StringValue: "true", Kind: reflect.Int16},
		{StringValue: "test", Kind: reflect.Int32},
		{StringValue: "true", Kind: reflect.Int32},
		{StringValue: "test", Kind: reflect.Int64},
		{StringValue: "true", Kind: reflect.Int64},
		{StringValue: "test", Kind: reflect.Uint},
		{StringValue: "-6", Kind: reflect.Uint},
		{StringValue: "test", Kind: reflect.Uint8},
		{StringValue: "-6", Kind: reflect.Uint8},
		{StringValue: "test", Kind: reflect.Uint16},
		{StringValue: "-6", Kind: reflect.Uint16},
		{StringValue: "test", Kind: reflect.Uint32},
		{StringValue: "-6", Kind: reflect.Uint32},
		{StringValue: "test", Kind: reflect.Uint64},
		{StringValue: "-6", Kind: reflect.Uint64},
	}

	for _, test := range tests {
		t.Run(fmt.Sprintf("%v %v", test.Kind, test.StringValue), func(t *testing.T) {
			parsedValue, err := getParsedValue(test.Kind, test.StringValue)

			if err == nil {
				t.Errorf("unexpected unhandled error - stringValue: %v, Kind: %v", test.StringValue, test.Kind)
				return
			}

			if parsedValue != reflect.ValueOf(nil) {
				t.Errorf("unexpected parsed value - parsedValue: %v", parsedValue)
			}
		})
	}
}
07070100000023000041ED000000000000000000000002689B9CB300000000000000000000000000000000000000000000001E00000000kubeshark-cli-52.8.1/debounce07070100000024000081A4000000000000000000000001689B9CB3000003BC000000000000000000000000000000000000002A00000000kubeshark-cli-52.8.1/debounce/debounce.gopackage debounce

import (
	"fmt"
	"time"
)

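// NewDebouncer returns a Debouncer that invokes callback once per arming, after the given timeout has elapsed.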
func NewDebouncer(timeout time.Duration, callback func()) *Debouncer {
	var debouncer Debouncer
	debouncer.setTimeout(timeout)
	debouncer.setCallback(callback)
	return &debouncer
}

type Debouncer struct {
	callback func()
	running  bool
	canceled bool
	timeout  time.Duration
	timer    *time.Timer
}

func (d *Debouncer) setTimeout(timeout time.Duration) {
	// TODO: Return err if d.running
	d.timeout = timeout
}

func (d *Debouncer) setCallback(callback func()) {
	callbackWrapped := func() {
		if !d.canceled {
			callback()
		}
		d.running = false
	}

	d.callback = callbackWrapped
}

func (d *Debouncer) Cancel() {
	d.canceled = true
}

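// SetOn arms the debouncer so the callback fires after the configured timeout.
// It returns an error if the debouncer was canceled and is a no-op while a callback is already pending.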
func (d *Debouncer) SetOn() error {
	if d.canceled {
		return fmt.Errorf("debouncer cancelled")
	}
	if d.running {
		return nil
	}

	d.running = true
	d.timer = time.AfterFunc(d.timeout, d.callback)
	return nil
}

func (d *Debouncer) IsOn() bool {
	return d.running
}
07070100000025000041ED000000000000000000000002689B9CB300000000000000000000000000000000000000000000002200000000kubeshark-cli-52.8.1/errormessage07070100000026000081A4000000000000000000000001689B9CB3000004F2000000000000000000000000000000000000003200000000kubeshark-cli-52.8.1/errormessage/errormessage.gopackage errormessage

import (
	"errors"
	"fmt"
	regexpsyntax "regexp/syntax"

	"github.com/kubeshark/kubeshark/config"
	"github.com/kubeshark/kubeshark/config/configStructs"
	"github.com/kubeshark/kubeshark/misc"

	k8serrors "k8s.io/apimachinery/pkg/api/errors"
)

// FormatError wraps an error with a detailed message that is meant for the user.
// While the errors are meant to be displayed, they are not meant to be exported as classes outside of the CLI.
func FormatError(err error) error {
	var errorNew error
	if k8serrors.IsForbidden(err) {
		errorNew = fmt.Errorf("insufficient permissions: %w. "+
			"supply the required permission or control %s's access to namespaces by setting %s "+
			"in the config file or setting the targeted namespace with --%s %s=<NAMESPACE>",
			err,
			misc.Software,
			configStructs.ReleaseNamespaceLabel,
			config.SetCommandName,
			configStructs.ReleaseNamespaceLabel)
	} else if syntaxError, isSyntaxError := asRegexSyntaxError(err); isSyntaxError {
		errorNew = fmt.Errorf("regex %s is invalid: %w", syntaxError.Expr, err)
	} else {
		errorNew = err
	}

	return errorNew
}

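// asRegexSyntaxError reports whether err wraps a regexp/syntax Error and returns the typed error when it does.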
func asRegexSyntaxError(err error) (*regexpsyntax.Error, bool) {
	var syntaxError *regexpsyntax.Error
	return syntaxError, errors.As(err, &syntaxError)
}
07070100000027000081A4000000000000000000000001689B9CB300001A5D000000000000000000000000000000000000001C00000000kubeshark-cli-52.8.1/go.modmodule github.com/kubeshark/kubeshark

go 1.24.0

toolchain go1.24.5

require (
	github.com/creasty/defaults v1.5.2
	github.com/fsnotify/fsnotify v1.7.0
	github.com/go-cmd/cmd v1.4.3
	github.com/goccy/go-yaml v1.11.2
	github.com/google/go-github/v37 v37.0.0
	github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674
	github.com/kubeshark/gopacket v1.1.39
	github.com/pkg/errors v0.9.1
	github.com/rivo/tview v0.0.0-20240818110301-fd649dbf1223
	github.com/robertkrimen/otto v0.2.1
	github.com/rs/zerolog v1.28.0
	github.com/spf13/cobra v1.9.1
	github.com/spf13/pflag v1.0.6
	github.com/tanqiangyes/grep-go v0.0.0-20220515134556-b36bff9c3d8e
	helm.sh/helm/v3 v3.18.4
	k8s.io/api v0.33.2
	k8s.io/apimachinery v0.33.2
	k8s.io/client-go v0.33.2
	k8s.io/kubectl v0.33.2
)

require (
	dario.cat/mergo v1.0.1 // indirect
	github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
	github.com/BurntSushi/toml v1.5.0 // indirect
	github.com/MakeNowJust/heredoc v1.0.0 // indirect
	github.com/Masterminds/goutils v1.1.1 // indirect
	github.com/Masterminds/semver/v3 v3.3.0 // indirect
	github.com/Masterminds/sprig/v3 v3.3.0 // indirect
	github.com/Masterminds/squirrel v1.5.4 // indirect
	github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
	github.com/blang/semver/v4 v4.0.0 // indirect
	github.com/chai2010/gettext-go v1.0.2 // indirect
	github.com/containerd/containerd v1.7.27 // indirect
	github.com/containerd/errdefs v0.3.0 // indirect
	github.com/containerd/log v0.1.0 // indirect
	github.com/containerd/platforms v0.2.1 // indirect
	github.com/cyphar/filepath-securejoin v0.4.1 // indirect
	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
	github.com/emicklei/go-restful/v3 v3.11.0 // indirect
	github.com/evanphx/json-patch v5.9.11+incompatible // indirect
	github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f // indirect
	github.com/fatih/color v1.13.0 // indirect
	github.com/fxamacker/cbor/v2 v2.7.0 // indirect
	github.com/gdamore/encoding v1.0.0 // indirect
	github.com/gdamore/tcell/v2 v2.7.1 // indirect
	github.com/go-errors/errors v1.4.2 // indirect
	github.com/go-gorp/gorp/v3 v3.1.0 // indirect
	github.com/go-logr/logr v1.4.2 // indirect
	github.com/go-openapi/jsonpointer v0.21.0 // indirect
	github.com/go-openapi/jsonreference v0.20.2 // indirect
	github.com/go-openapi/swag v0.23.0 // indirect
	github.com/go-playground/validator/v10 v10.14.0 // indirect
	github.com/gobwas/glob v0.2.3 // indirect
	github.com/gogo/protobuf v1.3.2 // indirect
	github.com/google/btree v1.1.3 // indirect
	github.com/google/gnostic-models v0.6.9 // indirect
	github.com/google/go-cmp v0.7.0 // indirect
	github.com/google/go-querystring v1.1.0 // indirect
	github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
	github.com/google/uuid v1.6.0 // indirect
	github.com/gosuri/uitable v0.0.4 // indirect
	github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 // indirect
	github.com/hashicorp/errwrap v1.1.0 // indirect
	github.com/hashicorp/go-multierror v1.1.1 // indirect
	github.com/huandu/xstrings v1.5.0 // indirect
	github.com/inconshreveable/mousetrap v1.1.0 // indirect
	github.com/jmoiron/sqlx v1.4.0 // indirect
	github.com/josharian/intern v1.0.0 // indirect
	github.com/json-iterator/go v1.1.12 // indirect
	github.com/klauspost/compress v1.18.0 // indirect
	github.com/kubeshark/tracerproto v1.0.0 // indirect
	github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 // indirect
	github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 // indirect
	github.com/lib/pq v1.10.9 // indirect
	github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
	github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
	github.com/mailru/easyjson v0.7.7 // indirect
	github.com/mattn/go-colorable v0.1.13 // indirect
	github.com/mattn/go-isatty v0.0.19 // indirect
	github.com/mattn/go-runewidth v0.0.15 // indirect
	github.com/mitchellh/copystructure v1.2.0 // indirect
	github.com/mitchellh/go-wordwrap v1.0.1 // indirect
	github.com/mitchellh/reflectwalk v1.0.2 // indirect
	github.com/moby/spdystream v0.5.0 // indirect
	github.com/moby/term v0.5.2 // indirect
	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
	github.com/modern-go/reflect2 v1.0.2 // indirect
	github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
	github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
	github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
	github.com/opencontainers/go-digest v1.0.0 // indirect
	github.com/opencontainers/image-spec v1.1.1 // indirect
	github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
	github.com/rivo/uniseg v0.4.7 // indirect
	github.com/rubenv/sql-migrate v1.8.0 // indirect
	github.com/russross/blackfriday/v2 v2.1.0 // indirect
	github.com/shopspring/decimal v1.4.0 // indirect
	github.com/sirupsen/logrus v1.9.3 // indirect
	github.com/spf13/cast v1.7.0 // indirect
	github.com/x448/float16 v0.8.4 // indirect
	github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
	github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
	github.com/xeipuuv/gojsonschema v1.2.0 // indirect
	github.com/xlab/treeprint v1.2.0 // indirect
	golang.org/x/crypto v0.39.0 // indirect
	golang.org/x/net v0.40.0 // indirect
	golang.org/x/oauth2 v0.28.0 // indirect
	golang.org/x/sync v0.15.0 // indirect
	golang.org/x/sys v0.33.0 // indirect
	golang.org/x/term v0.32.0 // indirect
	golang.org/x/text v0.26.0 // indirect
	golang.org/x/time v0.9.0 // indirect
	golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect
	google.golang.org/genproto/googleapis/rpc v0.0.0-20241209162323-e6fa225c2576 // indirect
	google.golang.org/grpc v1.68.1 // indirect
	google.golang.org/protobuf v1.36.5 // indirect
	gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
	gopkg.in/inf.v0 v0.9.1 // indirect
	gopkg.in/sourcemap.v1 v1.0.5 // indirect
	gopkg.in/yaml.v3 v3.0.1 // indirect
	k8s.io/apiextensions-apiserver v0.33.2 // indirect
	k8s.io/apiserver v0.33.2 // indirect
	k8s.io/cli-runtime v0.33.2 // indirect
	k8s.io/component-base v0.33.2 // indirect
	k8s.io/klog/v2 v2.130.1 // indirect
	k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff // indirect
	k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 // indirect
	oras.land/oras-go/v2 v2.6.0 // indirect
	sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 // indirect
	sigs.k8s.io/kustomize/api v0.19.0 // indirect
	sigs.k8s.io/kustomize/kyaml v0.19.0 // indirect
	sigs.k8s.io/randfill v1.0.0 // indirect
	sigs.k8s.io/structured-merge-diff/v4 v4.6.0 // indirect
	sigs.k8s.io/yaml v1.4.0 // indirect
)
07070100000028000081A4000000000000000000000001689B9CB30000C6B3000000000000000000000000000000000000001C00000000kubeshark-cli-52.8.1/go.sumdario.cat/mergo v1.0.1 h1:Ra4+bf83h2ztPIQYNP99R6m+Y7KfnARDfID+a+vLl4s=
dario.cat/mergo v1.0.1/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 h1:bvDV9vkmnHYOMsOr4WLk+Vo07yKIzd94sVoIqshQ4bU=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8=
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEKWjV8V+WSxDXJ4NFATAsZjh8iIbsQIg=
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg=
github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/DATA-DOG/go-sqlmock v1.5.2 h1:OcvFkGmslmlZibjAjaHm3L//6LiuBgolP7OputlJIzU=
github.com/DATA-DOG/go-sqlmock v1.5.2/go.mod h1:88MAG/4G7SMwSE3CeA0ZKzrT5CiOU3OJ+JlNzwDqpNU=
github.com/MakeNowJust/heredoc v1.0.0 h1:cXCdzVdstXyiTqTvfqk9SDHpKNjxuom+DOlyEeQ4pzQ=
github.com/MakeNowJust/heredoc v1.0.0/go.mod h1:mG5amYoWBHf8vpLOuehzbGGw0EHxpZZ6lCpQ4fNJ8LE=
github.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI=
github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU=
github.com/Masterminds/semver/v3 v3.3.0 h1:B8LGeaivUe71a5qox1ICM/JLl0NqZSW5CHyL+hmvYS0=
github.com/Masterminds/semver/v3 v3.3.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM=
github.com/Masterminds/sprig/v3 v3.3.0 h1:mQh0Yrg1XPo6vjYXgtf5OtijNAKJRNcTdOOGZe3tPhs=
github.com/Masterminds/sprig/v3 v3.3.0/go.mod h1:Zy1iXRYNqNLUolqCpL4uhk6SHUMAOSCzdgBfDb35Lz0=
github.com/Masterminds/squirrel v1.5.4 h1:uUcX/aBc8O7Fg9kaISIUsHXdKuqehiXAMQTYX8afzqM=
github.com/Masterminds/squirrel v1.5.4/go.mod h1:NNaOrjSoIDfDA40n7sr2tPNZRfjzjA400rg+riTZj10=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 h1:DklsrG3dyBCFEj5IhUbnKptjxatkF07cF2ak3yi77so=
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2/go.mod h1:WaHUgvxTVq04UNunO+XhnAqY/wQc+bxr74GqbsZ/Jqw=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/blang/semver/v4 v4.0.0 h1:1PFHFE6yCCTv8C1TeyNNarDzntLi7wMI5i/pzqYIsAM=
github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ=
github.com/bshuster-repo/logrus-logstash-hook v1.0.0 h1:e+C0SB5R1pu//O4MQ3f9cFuPGoOVeF2fE4Og9otCc70=
github.com/bshuster-repo/logrus-logstash-hook v1.0.0/go.mod h1:zsTqEiSzDgAa/8GZR7E1qaXrhYNDKBYy5/dWPTIflbk=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chai2010/gettext-go v1.0.2 h1:1Lwwip6Q2QGsAdl/ZKPCwTe9fe0CjlUbqj5bFNSjIRk=
github.com/chai2010/gettext-go v1.0.2/go.mod h1:y+wnP2cHYaVj19NZhYKAwEMH2CI1gNHeQQ+5AjwawxA=
github.com/containerd/containerd v1.7.27 h1:yFyEyojddO3MIGVER2xJLWoCIn+Up4GaHFquP7hsFII=
github.com/containerd/containerd v1.7.27/go.mod h1:xZmPnl75Vc+BLGt4MIfu6bp+fy03gdHAn9bz+FreFR0=
github.com/containerd/errdefs v0.3.0 h1:FSZgGOeK4yuT/+DnF07/Olde/q4KBoMsaamhXxIMDp4=
github.com/containerd/errdefs v0.3.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=
github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
github.com/containerd/platforms v0.2.1 h1:zvwtM3rz2YHPQsF2CHYM8+KtB5dvhISiXh5ZpSBQv6A=
github.com/containerd/platforms v0.2.1/go.mod h1:XHCb+2/hzowdiut9rkudds9bE5yJ7npe7dG/wG+uFPw=
github.com/coreos/go-systemd/v22 v22.3.3-0.20220203105225-a9a7ef127534/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.18 h1:n56/Zwd5o6whRC5PMGretI4IdRLlmBXYNjScPaBgsbY=
github.com/creack/pty v1.1.18/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4=
github.com/creasty/defaults v1.5.2 h1:/VfB6uxpyp6h0fr7SPp7n8WJBoV8jfxQXPCnkVSjyls=
github.com/creasty/defaults v1.5.2/go.mod h1:FPZ+Y0WNrbqOVw+c6av63eyHUAl6pMHZwqLPvXUZGfY=
github.com/cyphar/filepath-securejoin v0.4.1 h1:JyxxyPEaktOD+GAnqIqTf9A8tHyAG22rowi7HkoSU1s=
github.com/cyphar/filepath-securejoin v0.4.1/go.mod h1:Sdj7gXlvMcPZsbhwhQ33GguGLDGQL7h7bg04C/+u9jI=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/distribution/distribution/v3 v3.0.0 h1:q4R8wemdRQDClzoNNStftB2ZAfqOiN6UX90KJc4HjyM=
github.com/distribution/distribution/v3 v3.0.0/go.mod h1:tRNuFoZsUdyRVegq8xGNeds4KLjwLCRin/tTo6i1DhU=
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/docker/docker-credential-helpers v0.8.2 h1:bX3YxiGzFP5sOXWc3bTPEXdEaZSeVMrFgOr3T+zrFAo=
github.com/docker/docker-credential-helpers v0.8.2/go.mod h1:P3ci7E3lwkZg6XiHdRKft1KckHiO9a2rNtyFbZ/ry9M=
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c h1:+pKlWGMw7gf6bQ+oDZB4KHQFypsfjYlq/C4rfL7D3g8=
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
github.com/docker/go-metrics v0.0.1 h1:AgB/0SvBxihN0X8OR4SjsblXkbMvalQ8cjmtKQ2rQV8=
github.com/docker/go-metrics v0.0.1/go.mod h1:cG1hvH2utMXtqgqqYE9plW6lDxS3/5ayHzueweSI3Vw=
github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=
github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/evanphx/json-patch v5.9.11+incompatible h1:ixHHqfcGvxhWkniF1tWxBHA0yb4Z+d1UQi45df52xW8=
github.com/evanphx/json-patch v5.9.11+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f h1:Wl78ApPPB2Wvf/TIe2xdyJxTlb6obmF18d8QdkxNDu4=
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f/go.mod h1:OSYXu++VVOHnXeitef/D8n/6y4QV8uLHSFXX4NeXMGc=
github.com/fatih/color v1.13.0 h1:8LOYc1KYPPmyKMuN8QV2DNRWNbLo6LZ0iLs8+mlH53w=
github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/foxcpp/go-mockdns v1.1.0 h1:jI0rD8M0wuYAxL7r/ynTrCQQq0BVqfB99Vgk7DlmewI=
github.com/foxcpp/go-mockdns v1.1.0/go.mod h1:IhLeSFGed3mJIAXPH2aiRQB+kqz7oqu8ld2qVbOu7Wk=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E=
github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ=
github.com/gabriel-vasile/mimetype v1.4.2 h1:w5qFW6JKBz9Y393Y4q372O9A7cUSequkh1Q7OhCmWKU=
github.com/gabriel-vasile/mimetype v1.4.2/go.mod h1:zApsH/mKG4w07erKIaJPFiX0Tsq9BFQgN3qGY5GnNgA=
github.com/gdamore/encoding v1.0.0 h1:+7OoQ1Bc6eTm5niUzBa0Ctsh6JbMW6Ra+YNuAtDBdko=
github.com/gdamore/encoding v1.0.0/go.mod h1:alR0ol34c49FCSBLjhosxzcPHQbf2trDkoo5dl+VrEg=
github.com/gdamore/tcell/v2 v2.7.1 h1:TiCcmpWHiAU7F0rA2I3S2Y4mmLmO9KHxJ7E1QhYzQbc=
github.com/gdamore/tcell/v2 v2.7.1/go.mod h1:dSXtXTSK0VsW1biw65DZLZ2NKr7j0qP/0J7ONmsraWg=
github.com/go-cmd/cmd v1.4.3 h1:6y3G+3UqPerXvPcXvj+5QNPHT02BUw7p6PsqRxLNA7Y=
github.com/go-cmd/cmd v1.4.3/go.mod h1:u3hxg/ry+D5kwh8WvUkHLAMe2zQCaXd00t35WfQaOFk=
github.com/go-errors/errors v1.4.2 h1:J6MZopCL4uSllY1OfXM374weqZFFItUbrImctkmUxIA=
github.com/go-errors/errors v1.4.2/go.mod h1:sIVyrIiJhuEF+Pj9Ebtd6P/rEYROXFi3BopGUQ5a5Og=
github.com/go-gorp/gorp/v3 v3.1.0 h1:ItKF/Vbuj31dmV4jxA1qblpSwkl9g1typ24xoe70IGs=
github.com/go-gorp/gorp/v3 v3.1.0/go.mod h1:dLEjIyyRNiXvNZ8PSmzpt1GsWAUK8kjVhEpjH8TixEw=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ=
github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k=
github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=
github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
github.com/go-playground/validator/v10 v10.14.0 h1:vgvQWe3XCz3gIeFDm/HnTIbj6UGmg/+t63MyGU2n5js=
github.com/go-playground/validator/v10 v10.14.0/go.mod h1:9iXMNT7sEkjXb0I+enO7QXmzG6QCsPWY4zveKFVRSyU=
github.com/go-sql-driver/mysql v1.8.1 h1:LedoTUt/eveggdHS9qUFC1EFSa8bU2+1pZjSRpvNJ1Y=
github.com/go-sql-driver/mysql v1.8.1/go.mod h1:wEBSXgmK//2ZFJyE+qWnIsVGmvmEKlqwuVSjsCm7DZg=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/go-test/deep v1.1.0 h1:WOcxcdHcvdgThNXjw0t76K42FXTU7HpNQWHpA2HHNlg=
github.com/go-test/deep v1.1.0/go.mod h1:5C2ZWiW0ErCdrYzpqxLbTX7MG14M9iiw8DgHncVwcsE=
github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
github.com/goccy/go-yaml v1.11.2 h1:joq77SxuyIs9zzxEjgyLBugMQ9NEgTWxXfz2wVqwAaQ=
github.com/goccy/go-yaml v1.11.2/go.mod h1:wKnAMd44+9JAAnGQpWVEgBzGt3YuTaQ4uXoHvE4m7WU=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg=
github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4=
github.com/google/gnostic-models v0.6.9 h1:MU/8wDLif2qCXZmzncUQ/BOfxWfthHi63KqpoNbWqVw=
github.com/google/gnostic-models v0.6.9/go.mod h1:CiWsm0s6BSQd1hRn8/QmxqB6BesYcbSZxsz9b0KuDBw=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/go-github/v37 v37.0.0 h1:rCspN8/6kB1BAJWZfuafvHhyfIo5fkAulaP/3bOQ/tM=
github.com/google/go-github/v37 v37.0.0/go.mod h1:LM7in3NmXDrX58GbEHy7FtNLbI2JijX93RnMKvWG3m4=
github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8=
github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db h1:097atOisP2aRj7vFgYQBbFN4U4JNXUNYpxael3UzMyo=
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db/go.mod h1:vavhavw2zAxS5dIdcRluK6cSGGPlZynqzFM8NdvU144=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 h1:El6M4kTTCOh6aBiKaUGG7oYTSPP8MxqL4YI3kZKwcP4=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510/go.mod h1:pupxD2MaaD3pAXIBCelhxNneeOaAeabZDe5s4K6zSpQ=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/handlers v1.5.2 h1:cLTUSsNkgcwhgRqvCNmdbRWG0A3N4F+M2nWKdScwyEE=
github.com/gorilla/handlers v1.5.2/go.mod h1:dX+xVpaxdSw+q0Qek8SSsl3dfMk3jNddUkMzo0GtH0w=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
github.com/gosuri/uitable v0.0.4 h1:IG2xLKRvErL3uhY6e1BylFzG+aJiwQviDDTfOKeKTpY=
github.com/gosuri/uitable v0.0.4/go.mod h1:tKR86bXuXPZazfOTG1FIzvjIdXzd0mo4Vtn16vt0PJo=
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 h1:+ngKgrYPPJrOjhax5N+uePQ0Fh1Z7PheYoUI/0nzkPA=
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/grpc-gateway v1.16.0 h1:gmcG1KaJ57LophUzW0Hy8NmPhnMZb4M0+kPpLofRdBo=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.24.0 h1:TmHmbvxPmaegwhDubVz0lICL0J5Ka2vwTzhoePEXsGE=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.24.0/go.mod h1:qztMSjm835F2bXf+5HKAPIS5qsmQDqZna/PgVt4rWtI=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/golang-lru/arc/v2 v2.0.5 h1:l2zaLDubNhW4XO3LnliVj0GXO3+/CGNJAg1dcN2Fpfw=
github.com/hashicorp/golang-lru/arc/v2 v2.0.5/go.mod h1:ny6zBSQZi2JxIeYcv7kt2sH2PXJtirBN7RDhRpxPkxU=
github.com/hashicorp/golang-lru/v2 v2.0.5 h1:wW7h1TG88eUIJ2i69gaE3uNVtEPIagzhGvHgwfx2Vm4=
github.com/hashicorp/golang-lru/v2 v2.0.5/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/huandu/xstrings v1.5.0 h1:2ag3IFq9ZDANvthTwTiqSSZLjDc+BedvHPAp5tJy2TI=
github.com/huandu/xstrings v1.5.0/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jmoiron/sqlx v1.4.0 h1:1PLqN7S1UYp5t4SrVVnt4nUVNemrDAtxlulVe+Qgm3o=
github.com/jmoiron/sqlx v1.4.0/go.mod h1:ZrZ7UsYB/weZdl2Bxg6jCRO9c3YHl8r3ahlKmRT4JLY=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kubeshark/gopacket v1.1.39 h1:NNiMTPO8v2+5FVlJTulT0Z+O0TLEAzavJBto10AY7js=
github.com/kubeshark/gopacket v1.1.39/go.mod h1:Qo8/i/tdT74CCT7/pjO0L55Pktv5dQfj7M/Arv8MKm8=
github.com/kubeshark/tracerproto v1.0.0 h1:/euPX9KMrKDS92hSMrLuhncYAX22dYlsnM2aD4AYhhE=
github.com/kubeshark/tracerproto v1.0.0/go.mod h1:+efDYkwXxwakmHRpxHVEekyXNtg/aFx0uSo/I0lGV9k=
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 h1:SOEGU9fKiNWd/HOJuq6+3iTQz8KNCLtVX6idSoTLdUw=
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0/go.mod h1:dXGbAdH5GtBTC4WfIxhKZfyBF/HBFgRZSWwZ9g/He9o=
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 h1:P6pPBnrTSX3DEVR4fDembhRWSsG5rVo6hYhAB/ADZrk=
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0/go.mod h1:vmVJ0l/dxyfGW6FmdpVm2joNMFikkuWg0EoCKLGUMNw=
github.com/leodido/go-urn v1.2.4 h1:XlAE/cm/ms7TE/VMVoduSpNBoyc2dOxHs5MZSwAN63Q=
github.com/leodido/go-urn v1.2.4/go.mod h1:7ZrI8mTSeBSHl/UaRyKQW1qZeMgak41ANeCNaVckg+4=
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de h1:9TO3cAIGXtEhnIaL+V+BEER86oLrvS+kWobKpbJuye0=
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de/go.mod h1:zAbeS9B/r2mtpb6U+EI2rYA5OAXxsYw6wTamcNW+zcE=
github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY=
github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.12/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.15 h1:UNAjwbU9l54TA3KzvqLGxwWjHmMgBUVhBiTjelZgg3U=
github.com/mattn/go-runewidth v0.0.15/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mattn/go-sqlite3 v1.14.22 h1:2gZY6PC6kBnID23Tichd1K+Z0oS6nE/XwU+Vz/5o4kU=
github.com/mattn/go-sqlite3 v1.14.22/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/miekg/dns v1.1.57 h1:Jzi7ApEIzwEPLHWRcafCN9LZSBbqQpxjt/wpgvg7wcM=
github.com/miekg/dns v1.1.57/go.mod h1:uqRjCRUuEAA6qsOiJvDd+CFo/vW+y5WR6SNmHE55hZk=
github.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw=
github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s=
github.com/mitchellh/go-wordwrap v1.0.1 h1:TLuKupo69TCn6TQSyGxwI1EblZZEsQ0vMlAFQflz0v0=
github.com/mitchellh/go-wordwrap v1.0.1/go.mod h1:R62XHJLzvMFRBbcrT7m7WgmE1eOyTSsCt+hzestvNj0=
github.com/mitchellh/reflectwalk v1.0.2 h1:G2LzWKi524PWgd3mLHV8Y5k7s6XUvT0Gef6zxSIeXaQ=
github.com/mitchellh/reflectwalk v1.0.2/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
github.com/moby/spdystream v0.5.0 h1:7r0J1Si3QO/kjRitvSLVVFUjxMEb/YLj6S9FF62JBCU=
github.com/moby/spdystream v0.5.0/go.mod h1:xBAYlnt/ay+11ShkdFKNAG7LsyK/tmNBVvVOwrfMgdI=
github.com/moby/term v0.5.2 h1:6qk3FJAFDs6i/q3W/pQ97SX192qKfZgGjCQqfCJkgzQ=
github.com/moby/term v0.5.2/go.mod h1:d3djjFCrjnB+fl8NJux+EJzu0msscUP+f8it8hPkFLc=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 h1:n6/2gBQ3RWajuToeY6ZtZTIKv2v7ThUy5KKusIT0yc0=
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00/go.mod h1:Pm3mSP3c5uWn86xMLZ5Sa7JB9GsEZySvHYXCTK4E9q4=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f h1:y5//uYreIhSUg3J1GEMiLbxo1LJaP8RfCpH6pymGZus=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/onsi/ginkgo/v2 v2.21.0 h1:7rg/4f3rB88pb5obDgNZrNHrQ4e6WpjonchcpuBRnZM=
github.com/onsi/ginkgo/v2 v2.21.0/go.mod h1:7Du3c42kxCUegi0IImZ1wUQzMBVecgIHjR1C+NkhLQo=
github.com/onsi/gomega v1.35.1 h1:Cwbd75ZBPxFSuZ6T+rN/WCb/gOc6YgFBXLlZLhC7Ds4=
github.com/onsi/gomega v1.35.1/go.mod h1:PvZbdDc8J6XJEpDK4HCuRBm8a6Fzp9/DmhC9C7yFlog=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
github.com/peterbourgon/diskv v2.0.1+incompatible h1:UBdAOUP5p4RWqPBg048CAvpKN+vxiaj6gdUUzhl4XmI=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/phayes/freeport v0.0.0-20220201140144-74d24b5ae9f5 h1:Ii+DKncOVM8Cu1Hc+ETb5K+23HdAMvESYE3ZJ5b5cMI=
github.com/phayes/freeport v0.0.0-20220201140144-74d24b5ae9f5/go.mod h1:iIss55rKnNBTvrwdmkUpLnDpZoAHvWaiq5+iMmen4AE=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/poy/onpar v1.1.2 h1:QaNrNiZx0+Nar5dLgTVp5mXkyoVFIbepjyEoGSnhbAY=
github.com/poy/onpar v1.1.2/go.mod h1:6X8FLNoxyr9kkmnlqpK6LSoiOtrO6MICtWwEuWkLjzg=
github.com/prometheus/client_golang v1.22.0 h1:rb93p9lokFEsctTys46VnV1kLCDpVZ0a/Y92Vm0Zc6Q=
github.com/prometheus/client_golang v1.22.0/go.mod h1:R7ljNsLXhuQXYZYtw6GAE9AZg8Y7vEW5scdCXrWRXC0=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.62.0 h1:xasJaQlnWAeyHdUBeGjXmutelfJHWMRr+Fg4QszZ2Io=
github.com/prometheus/common v0.62.0/go.mod h1:vyBcEuLSvWos9B1+CyL7JZ2up+uFzXhkqml0W5zIY1I=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/redis/go-redis/extra/rediscmd/v9 v9.0.5 h1:EaDatTxkdHG+U3Bk4EUr+DZ7fOGwTfezUiUJMaIcaho=
github.com/redis/go-redis/extra/rediscmd/v9 v9.0.5/go.mod h1:fyalQWdtzDBECAQFBJuQe5bzQ02jGd5Qcbgb97Flm7U=
github.com/redis/go-redis/extra/redisotel/v9 v9.0.5 h1:EfpWLLCyXw8PSM2/XNJLjI3Pb27yVE+gIAfeqp8LUCc=
github.com/redis/go-redis/extra/redisotel/v9 v9.0.5/go.mod h1:WZjPDy7VNzn77AAfnAfVjZNvfJTYfPetfZk5yoSTLaQ=
github.com/redis/go-redis/v9 v9.7.3 h1:YpPyAayJV+XErNsatSElgRZZVCwXX9QzkKYNvO7x0wM=
github.com/redis/go-redis/v9 v9.7.3/go.mod h1:bGUrSggJ9X9GUmZpZNEOQKaANxSGgOEBRltRTZHSvrA=
github.com/rivo/tview v0.0.0-20240818110301-fd649dbf1223 h1:N+DggyldbUDqFlk0b8JeRjB9zGpmQ8wiKpq+VBbzRso=
github.com/rivo/tview v0.0.0-20240818110301-fd649dbf1223/go.mod h1:02iFIz7K/A9jGCvrizLPvoqr4cEIx7q54RH5Qudkrss=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.3/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/robertkrimen/otto v0.2.1 h1:FVP0PJ0AHIjC+N4pKCG9yCDz6LHNPCwi/GKID5pGGF0=
github.com/robertkrimen/otto v0.2.1/go.mod h1:UPwtJ1Xu7JrLcZjNWN8orJaM5n5YEtqL//farB5FlRY=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/rs/xid v1.4.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rs/zerolog v1.28.0 h1:MirSo27VyNi7RJYP3078AA1+Cyzd2GB66qy3aUHvsWY=
github.com/rs/zerolog v1.28.0/go.mod h1:NILgTygv/Uej1ra5XxGf82ZFSLk58MFGAUS2o6usyD0=
github.com/rubenv/sql-migrate v1.8.0 h1:dXnYiJk9k3wetp7GfQbKJcPHjVJL6YK19tKj8t2Ns0o=
github.com/rubenv/sql-migrate v1.8.0/go.mod h1:F2bGFBwCU+pnmbtNYDeKvSuvL6lBVtXDXUUv5t+u1qw=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sergi/go-diff v1.2.0 h1:XU+rvMAioB0UC3q1MFrIQy4Vo5/4VsRDQQXHsEya6xQ=
github.com/sergi/go-diff v1.2.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
github.com/shopspring/decimal v1.4.0 h1:bxl37RwXBklmTi0C79JfXCEBD1cqqHt0bbgBAGFp81k=
github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/spf13/cast v1.7.0 h1:ntdiHjuueXFgm5nzDRdOS4yfT43P5Fnud6DH50rz/7w=
github.com/spf13/cast v1.7.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/spf13/cobra v1.9.1 h1:CXSaggrXdbHK9CF+8ywj8Amf7PBRmPCOJugH954Nnlo=
github.com/spf13/cobra v1.9.1/go.mod h1:nDyEzZ8ogv936Cinf6g1RU9MRY64Ir93oCnqb9wxYW0=
github.com/spf13/pflag v1.0.6 h1:jFzHGLGAlb3ruxLB8MhbI6A8+AQX/2eW4qeyNZXNp2o=
github.com/spf13/pflag v1.0.6/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/tanqiangyes/grep-go v0.0.0-20220515134556-b36bff9c3d8e h1:+qDZ81UqxfZsWK6Vq9wET3AsdQxHGbViYOqkNxZ9FnU=
github.com/tanqiangyes/grep-go v0.0.0-20220515134556-b36bff9c3d8e/go.mod h1:ANZlXE3vfRYCYnkojePl2hJODYmOeCVD+XahuhDdTbI=
github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
github.com/vishvananda/netns v0.0.0-20210104183010-2eb08e3e575f/go.mod h1:DD4vA1DwXk04H54A1oHXtwZmA0grkVMdPxx/VGLCah0=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb h1:zGWFAtiMcyryUHoUjUJX0/lt1H2+i2Ka2n+D3DImSNo=
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0=
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ=
github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17UxZ74=
github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y=
github.com/xlab/treeprint v1.2.0 h1:HzHnuAF1plUN2zGlAFHbSQP2qJ0ZAD3XF5XD7OesXRQ=
github.com/xlab/treeprint v1.2.0/go.mod h1:gj5Gd3gPdKtR1ikdDK6fnFLdmIS0X30kTTuNd/WEJu0=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/bridges/prometheus v0.57.0 h1:UW0+QyeyBVhn+COBec3nGhfnFe5lwB0ic1JBVjzhk0w=
go.opentelemetry.io/contrib/bridges/prometheus v0.57.0/go.mod h1:ppciCHRLsyCio54qbzQv0E4Jyth/fLWDTJYfvWpcSVk=
go.opentelemetry.io/contrib/exporters/autoexport v0.57.0 h1:jmTVJ86dP60C01K3slFQa2NQ/Aoi7zA+wy7vMOKD9H4=
go.opentelemetry.io/contrib/exporters/autoexport v0.57.0/go.mod h1:EJBheUMttD/lABFyLXhce47Wr6DPWYReCzaZiXadH7g=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0 h1:yd02MEjBdJkG3uabWP9apV+OuWRIXGDuJEUJbOHmCFU=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0/go.mod h1:umTcuxiv1n/s/S6/c2AT/g2CQ7u5C59sHDNmfSwgz7Q=
go.opentelemetry.io/otel v1.33.0 h1:/FerN9bax5LoK51X/sI0SVYrjSE0/yUL7DpxW4K3FWw=
go.opentelemetry.io/otel v1.33.0/go.mod h1:SUUkR6csvUQl+yjReHu5uM3EtVV7MBm5FHKRlNx4I8I=
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.8.0 h1:WzNab7hOOLzdDF/EoWCt4glhrbMPVMOO5JYTmpz36Ls=
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.8.0/go.mod h1:hKvJwTzJdp90Vh7p6q/9PAOd55dI6WA6sWj62a/JvSs=
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.8.0 h1:S+LdBGiQXtJdowoJoQPEtI52syEP/JYBUpjO49EQhV8=
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.8.0/go.mod h1:5KXybFvPGds3QinJWQT7pmXf+TN5YIa7CNYObWRkj50=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.32.0 h1:j7ZSD+5yn+lo3sGV69nW04rRR0jhYnBwjuX3r0HvnK0=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.32.0/go.mod h1:WXbYJTUaZXAbYd8lbgGuvih0yuCfOFC5RJoYnoLcGz8=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.32.0 h1:t/Qur3vKSkUCcDVaSumWF2PKHt85pc7fRvFuoVT8qFU=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.32.0/go.mod h1:Rl61tySSdcOJWoEgYZVtmnKdA0GeKrSqkHC1t+91CH8=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.33.0 h1:Vh5HayB/0HHfOQA7Ctx69E/Y/DcQSMPpKANYVMQ7fBA=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.33.0/go.mod h1:cpgtDBaqD/6ok/UG0jT15/uKjAY8mRA53diogHBg3UI=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.33.0 h1:5pojmb1U1AogINhN3SurB+zm/nIcusopeBNp42f45QM=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.33.0/go.mod h1:57gTHJSE5S1tqg+EKsLPlTWhpHMsWlVmer+LA926XiA=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.32.0 h1:cMyu9O88joYEaI47CnQkxO1XZdpoTF9fEnW2duIddhw=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.32.0/go.mod h1:6Am3rn7P9TVVeXYG+wtcGE7IE1tsQ+bP3AuWcKt/gOI=
go.opentelemetry.io/otel/exporters/prometheus v0.54.0 h1:rFwzp68QMgtzu9PgP3jm9XaMICI6TsofWWPcBDKwlsU=
go.opentelemetry.io/otel/exporters/prometheus v0.54.0/go.mod h1:QyjcV9qDP6VeK5qPyKETvNjmaaEc7+gqjh4SS0ZYzDU=
go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.8.0 h1:CHXNXwfKWfzS65yrlB2PVds1IBZcdsX8Vepy9of0iRU=
go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.8.0/go.mod h1:zKU4zUgKiaRxrdovSS2amdM5gOc59slmo/zJwGX+YBg=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.32.0 h1:SZmDnHcgp3zwlPBS2JX2urGYe/jBKEIT6ZedHRUyCz8=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.32.0/go.mod h1:fdWW0HtZJ7+jNpTKUR0GpMEDP69nR8YBJQxNiVCE3jk=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.32.0 h1:cC2yDI3IQd0Udsux7Qmq8ToKAx1XCilTQECZ0KDZyTw=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.32.0/go.mod h1:2PD5Ex6z8CFzDbTdOlwyNIUywRr1DN0ospafJM1wJ+s=
go.opentelemetry.io/otel/log v0.8.0 h1:egZ8vV5atrUWUbnSsHn6vB8R21G2wrKqNiDt3iWertk=
go.opentelemetry.io/otel/log v0.8.0/go.mod h1:M9qvDdUTRCopJcGRKg57+JSQ9LgLBrwwfC32epk5NX8=
go.opentelemetry.io/otel/metric v1.33.0 h1:r+JOocAyeRVXD8lZpjdQjzMadVZp2M4WmQ+5WtEnklQ=
go.opentelemetry.io/otel/metric v1.33.0/go.mod h1:L9+Fyctbp6HFTddIxClbQkjtubW6O9QS3Ann/M82u6M=
go.opentelemetry.io/otel/sdk v1.33.0 h1:iax7M131HuAm9QkZotNHEfstof92xM+N8sr3uHXc2IM=
go.opentelemetry.io/otel/sdk v1.33.0/go.mod h1:A1Q5oi7/9XaMlIWzPSxLRWOI8nG3FnzHJNbiENQuihM=
go.opentelemetry.io/otel/sdk/log v0.8.0 h1:zg7GUYXqxk1jnGF/dTdLPrK06xJdrXgqgFLnI4Crxvs=
go.opentelemetry.io/otel/sdk/log v0.8.0/go.mod h1:50iXr0UVwQrYS45KbruFrEt4LvAdCaWWgIrsN3ZQggo=
go.opentelemetry.io/otel/sdk/metric v1.32.0 h1:rZvFnvmvawYb0alrYkjraqJq0Z4ZUJAiyYCU9snn1CU=
go.opentelemetry.io/otel/sdk/metric v1.32.0/go.mod h1:PWeZlq0zt9YkYAp3gjKZ0eicRYvOh1Gd+X99x6GHpCQ=
go.opentelemetry.io/otel/trace v1.33.0 h1:cCJuF7LRjUFso9LPnEAHJDB2pqzp+hbO8eu1qqW2d/s=
go.opentelemetry.io/otel/trace v1.33.0/go.mod h1:uIcdVUZMpTAmz0tI1z04GoVSezK37CbGV4fr1f2nBck=
go.opentelemetry.io/proto/otlp v1.4.0 h1:TA9WRvW6zMwP+Ssb6fLoUIuirti1gGbP28GcKG1jgeg=
go.opentelemetry.io/proto/otlp v1.4.0/go.mod h1:PPBWZIP98o2ElSqI35IHfu7hIhSwvc5N38Jw8pXuGFY=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.39.0 h1:SHs+kF4LP+f+p14esP5jAoDpHU8Gu/v9lFRK6IT5imM=
golang.org/x/crypto v0.39.0/go.mod h1:L+Xg3Wf6HoL4Bn4238Z6ft6KfEpN0tJGo53AAPC632U=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.25.0 h1:n7a+ZbQKQA/Ysbyb0/6IbB1H/X41mKgbhfv7AfG/44w=
golang.org/x/mod v0.25.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.40.0 h1:79Xs7wF06Gbdcg4kdCCIQArK11Z1hr5POQ6+fIYHNuY=
golang.org/x/net v0.40.0/go.mod h1:y0hY0exeL2Pku80/zKK7tpntoX23cqL3Oa6njdgRtds=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.28.0 h1:CrgCKl8PPAVtLnU3c+EDw6x11699EWlsDeWNWKdIOkc=
golang.org/x/oauth2 v0.28.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.15.0 h1:KWH3jNZsfyT6xfAfKiz6MRNmd46ByHDYaZ7KSkCtdW8=
golang.org/x/sync v0.15.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606203320-7fc4e5ec1444/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200217220822-9197077df867/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.32.0 h1:DR4lr0TjUs3epypdhTOkMmuF5CDFJ/8pOnbzMZPQ7bg=
golang.org/x/term v0.32.0/go.mod h1:uZG1FhGx848Sqfsq4/DlJr3xGGsYMu/L5GW4abiaEPQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.26.0 h1:P42AVeLghgTYr4+xUnTRKDMqpar+PtX7KWuNQL21L8M=
golang.org/x/text v0.26.0/go.mod h1:QK15LZJUUQVJxhz7wXgxSy/CJaTFjd0G+YLonydOVQA=
golang.org/x/time v0.9.0 h1:EsRrnYcQiGH+5FfbgvV4AP7qEZstoyrHB0DzarOQ4ZY=
golang.org/x/time v0.9.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.33.0 h1:4qz2S3zmRxbGIhDIAgjxvFutSvH5EfnsYrRBj0UI0bc=
golang.org/x/tools v0.33.0/go.mod h1:CIJMaWEY88juyUfo7UbgPqbC8rU2OqfAV1h2Qp0oMYI=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 h1:H2TDz8ibqkAF6YGhCdN3jS9O0/s90v0rJh3X/OLHEUk=
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80 h1:KAeGQVN3M9nD0/bQXnr/ClcEMJ968gUXJQ9pwfSynuQ=
google.golang.org/genproto/googleapis/api v0.0.0-20241209162323-e6fa225c2576 h1:CkkIfIt50+lT6NHAVoRYEyAvQGFM7xEwXUUywFvEb3Q=
google.golang.org/genproto/googleapis/api v0.0.0-20241209162323-e6fa225c2576/go.mod h1:1R3kvZ1dtP3+4p4d3G8uJ8rFk/fWlScl38vanWACI08=
google.golang.org/genproto/googleapis/rpc v0.0.0-20241209162323-e6fa225c2576 h1:8ZmaLZE4XWrtU3MyClkYqqtl6Oegr3235h7jxsDyqCY=
google.golang.org/genproto/googleapis/rpc v0.0.0-20241209162323-e6fa225c2576/go.mod h1:5uTbfoYQed2U9p3KIj2/Zzm02PYhndfdmML0qC3q3FU=
google.golang.org/grpc v1.68.1 h1:oI5oTa11+ng8r8XMMN7jAOmWfPZWbYpCFaMUTACxkM0=
google.golang.org/grpc v1.68.1/go.mod h1:+q1XYFJjShcqn0QZHvCyeR4CXPA+llXIeUIfIe00waw=
google.golang.org/protobuf v1.36.5 h1:tPhr+woSbjfYvY6/GPufUoYizxw1cF/yFoxJ2fmpwlM=
google.golang.org/protobuf v1.36.5/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4=
gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/sourcemap.v1 v1.0.5 h1:inv58fC9f9J3TK2Y2R1NPntXEn3/wjWHkonhIUODNTI=
gopkg.in/sourcemap.v1 v1.0.5/go.mod h1:2RlvNNSMglmRrcvhfuzp4hQHwOtjxlbjX7UPY/GXb78=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
helm.sh/helm/v3 v3.18.4 h1:pNhnHM3nAmDrxz6/UC+hfjDY4yeDATQCka2/87hkZXQ=
helm.sh/helm/v3 v3.18.4/go.mod h1:WVnwKARAw01iEdjpEkP7Ii1tT1pTPYfM1HsakFKM3LI=
k8s.io/api v0.33.2 h1:YgwIS5jKfA+BZg//OQhkJNIfie/kmRsO0BmNaVSimvY=
k8s.io/api v0.33.2/go.mod h1:fhrbphQJSM2cXzCWgqU29xLDuks4mu7ti9vveEnpSXs=
k8s.io/apiextensions-apiserver v0.33.2 h1:6gnkIbngnaUflR3XwE1mCefN3YS8yTD631JXQhsU6M8=
k8s.io/apiextensions-apiserver v0.33.2/go.mod h1:IvVanieYsEHJImTKXGP6XCOjTwv2LUMos0YWc9O+QP8=
k8s.io/apimachinery v0.33.2 h1:IHFVhqg59mb8PJWTLi8m1mAoepkUNYmptHsV+Z1m5jY=
k8s.io/apimachinery v0.33.2/go.mod h1:BHW0YOu7n22fFv/JkYOEfkUYNRN0fj0BlvMFWA7b+SM=
k8s.io/apiserver v0.33.2 h1:KGTRbxn2wJagJowo29kKBp4TchpO1DRO3g+dB/KOJN4=
k8s.io/apiserver v0.33.2/go.mod h1:9qday04wEAMLPWWo9AwqCZSiIn3OYSZacDyu/AcoM/M=
k8s.io/cli-runtime v0.33.2 h1:koNYQKSDdq5AExa/RDudXMhhtFasEg48KLS2KSAU74Y=
k8s.io/cli-runtime v0.33.2/go.mod h1:gnhsAWpovqf1Zj5YRRBBU7PFsRc6NkEkwYNQE+mXL88=
k8s.io/client-go v0.33.2 h1:z8CIcc0P581x/J1ZYf4CNzRKxRvQAwoAolYPbtQes+E=
k8s.io/client-go v0.33.2/go.mod h1:9mCgT4wROvL948w6f6ArJNb7yQd7QsvqavDeZHvNmHo=
k8s.io/component-base v0.33.2 h1:sCCsn9s/dG3ZrQTX/Us0/Sx2R0G5kwa0wbZFYoVp/+0=
k8s.io/component-base v0.33.2/go.mod h1:/41uw9wKzuelhN+u+/C59ixxf4tYQKW7p32ddkYNe2k=
k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff h1:/usPimJzUKKu+m+TE36gUyGcf03XZEP0ZIKgKj35LS4=
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff/go.mod h1:5jIi+8yX4RIb8wk3XwBo5Pq2ccx4FP10ohkbSKCZoK8=
k8s.io/kubectl v0.33.2 h1:7XKZ6DYCklu5MZQzJe+CkCjoGZwD1wWl7t/FxzhMz7Y=
k8s.io/kubectl v0.33.2/go.mod h1:8rC67FB8tVTYraovAGNi/idWIK90z2CHFNMmGJZJ3KI=
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 h1:M3sRQVHv7vB20Xc2ybTt7ODCeFj6JSWYFzOFnYeS6Ro=
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
oras.land/oras-go/v2 v2.6.0 h1:X4ELRsiGkrbeox69+9tzTu492FMUu7zJQW6eJU+I2oc=
oras.land/oras-go/v2 v2.6.0/go.mod h1:magiQDfG6H1O9APp+rOsvCPcW1GD2MM7vgnKY0Y+u1o=
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 h1:/Rv+M11QRah1itp8VhT6HoVx1Ray9eB4DBr+K+/sCJ8=
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3/go.mod h1:18nIHnGi6636UCz6m8i4DhaJ65T6EruyzmoQqI2BVDo=
sigs.k8s.io/kustomize/api v0.19.0 h1:F+2HB2mU1MSiR9Hp1NEgoU2q9ItNOaBJl0I4Dlus5SQ=
sigs.k8s.io/kustomize/api v0.19.0/go.mod h1:/BbwnivGVcBh1r+8m3tH1VNxJmHSk1PzP5fkP6lbL1o=
sigs.k8s.io/kustomize/kyaml v0.19.0 h1:RFge5qsO1uHhwJsu3ipV7RNolC7Uozc0jUBC/61XSlA=
sigs.k8s.io/kustomize/kyaml v0.19.0/go.mod h1:FeKD5jEOH+FbZPpqUghBP8mrLjJ3+zD3/rf9NNu1cwY=
sigs.k8s.io/randfill v0.0.0-20250304075658-069ef1bbf016/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU=
sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/structured-merge-diff/v4 v4.6.0 h1:IUA9nvMmnKWcj5jl84xn+T5MnlZKThmUW1TdblaLVAc=
sigs.k8s.io/structured-merge-diff/v4 v4.6.0/go.mod h1:dDy58f92j70zLsuZVuUX5Wp9vtxXpaZnkPGWeqDfCps=
sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=
07070100000029000041ED000000000000000000000002689B9CB300000000000000000000000000000000000000000000002000000000kubeshark-cli-52.8.1/helm-chart0707010000002A000081A4000000000000000000000001689B9CB300000256000000000000000000000000000000000000002B00000000kubeshark-cli-52.8.1/helm-chart/Chart.yamlapiVersion: v2
name: kubeshark
version: "52.8.1"
description: The API Traffic Analyzer for Kubernetes
home: https://kubeshark.co
keywords:
  - kubeshark
  - packet capture
  - traffic capture
  - traffic analyzer
  - network sniffer
  - observability
  - devops
  - microservice
  - forensics
  - api
kubeVersion: '>= 1.16.0-0'
maintainers:
  - email: info@kubeshark.co
    name: Kubeshark
    url: https://kubeshark.co
sources:
  - https://github.com/kubeshark/kubeshark/tree/master/helm-chart
type: application
icon: https://raw.githubusercontent.com/kubeshark/assets/master/logo/vector/logo.svg
0707010000002B000081A4000000000000000000000001689B9CB300002A05000000000000000000000000000000000000002800000000kubeshark-cli-52.8.1/helm-chart/LICENSE
                                 Apache License
                           Version 2.0, January 2004
                        https://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   Copyright 2022 Kubeshark

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       https://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
0707010000002C000081A4000000000000000000000001689B9CB300007EFA000000000000000000000000000000000000002A00000000kubeshark-cli-52.8.1/helm-chart/README.md# Helm Chart of Kubeshark

## Official

Add the Helm repo for Kubeshark:

```shell
helm repo add kubeshark https://helm.kubeshark.co
```

then install Kubeshark:

```shell
helm install kubeshark kubeshark/kubeshark
```

## Local

Clone the repo:

```shell
git clone git@github.com:kubeshark/kubeshark.git --depth 1
cd kubeshark/helm-chart
```

In case you want to clone a specific tag of the repo (e.g. `v52.3.59`):

```shell
git clone git@github.com:kubeshark/kubeshark.git --depth 1 --branch <tag>
cd kubeshark/helm-chart
```
> See the list of available tags here: https://github.com/kubeshark/kubeshark/tags

Render the templates:

```shell
helm template .
```

Install Kubeshark:

```shell
helm install kubeshark .
```

Uninstall Kubeshark:

```shell
helm uninstall kubeshark
```

## Port-forward

Do the port forwarding:

```shell
kubectl port-forward service/kubeshark-front 8899:80
```

Visit [localhost:8899](http://localhost:8899)

You can also use `kubeshark proxy` for a more stable port-forward connection.
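
For example, a minimal invocation (assuming the Kubeshark CLI is installed; it serves the dashboard on the configured front port, `8899` by default):

```shell
kubeshark proxy
```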

## Add a License Key

When it's necessary, you can use:

```shell
--set license=YOUR_LICENSE_GOES_HERE
```
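
For example, combined with the install command shown above (the license value is a placeholder):

```shell
helm install kubeshark kubeshark/kubeshark \
  --set license=YOUR_LICENSE_GOES_HERE
```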

Get your license from Kubeshark's [Admin Console](https://console.kubeshark.co/).

## Installing with Ingress (EKS) enabled

```shell
helm install kubeshark kubeshark/kubeshark -f values.yaml
```

Set this `values.yaml`:
```yaml
tap:
  ingress:
    enabled: true
    className: "alb"
    host: ks.example.com
    tls: []
    annotations:
      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:7..8:certificate/b...65c
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/scheme: internet-facing
```

## Disabling IPv6

Not all clusters have IPv6 enabled; in such cases, disable it as follows:

```shell
helm install kubeshark kubeshark/kubeshark \
  --set tap.ipv6=false
```

## Prometheus Metrics

Please refer to [metrics](./metrics.md) documentation for details.

## Override Tag, Tags, Images

In addition to using a private registry, you can override the global image tag, individual image tags, and individual image names.

Example for overriding image names:

```yaml
tap:
  docker:
    overrideImage:
      worker: docker.io/kubeshark/worker:v52.3.87
      front:  docker.io/kubeshark/front:v52.3.87
      hub:    docker.io/kubeshark/hub:v52.3.87
```
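
Similarly, a sketch for overriding only the per-component image tags, assuming the `tap.docker.overrideTag.<component>` keys referenced by the chart templates:

```yaml
tap:
  docker:
    overrideTag:
      worker: v52.3.87
      front:  v52.3.87
      hub:    v52.3.87
```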

## Configuration

| Parameter                                 | Description                                   | Default                                                 |
|-------------------------------------------|-----------------------------------------------|---------------------------------------------------------|
| `tap.docker.registry`                     | Docker registry to pull from                  | `docker.io/kubeshark`                                   |
| `tap.docker.tag`                          | Tag of the Docker images                      | `latest`                                                |
| `tap.docker.tagLocked`                    | Lock the Docker image tags to prevent automatic upgrades; if `false`, the latest minor tag is used | `true`   |
| `tap.docker.imagePullPolicy`              | Kubernetes image pull policy                  | `Always`                                                |
| `tap.docker.imagePullSecrets`             | Kubernetes secrets to pull the images         | `[]`                                                    |
| `tap.docker.overrideImage`                | Can be used to directly override image names  | `""`                                                    |
| `tap.docker.overrideTag`                  | Can be used to override image tags            | `""`                                                    |
| `tap.proxy.hub.srvPort`                   | Hub server port. Change if already occupied.  | `8898`                                                  |
| `tap.proxy.worker.srvPort`                | Worker server port. Change if already occupied.| `48999`                                                |
| `tap.proxy.front.port`                    | Front service port. Change if already occupied.| `8899`                                                 |
| `tap.proxy.host`                          | Change to `0.0.0.0` to open up to the world.  | `127.0.0.1`                                             |
| `tap.regex`                               | Target (process traffic from) pods that match regex | `.*`                                              |
| `tap.namespaces`                          | Target pods in namespaces                     | `[]`                                                    |
| `tap.excludedNamespaces`                  | Exclude pods in namespaces                    | `[]`                                                    |
| `tap.bpfOverride`                         | When using AF_PACKET as a traffic capture backend, override any existing pod targeting rules and set explicit BPF expression (e.g. `net 0.0.0.0/0`).                                                          | `[]`                                                    |
| `tap.capture.stopped`                             | Set to `false` to have traffic processing start automatically. When set to `true`, traffic processing is stopped by default, resulting in almost no resource consumption (e.g. Kubeshark is dormant). This property can be controlled dynamically via the dashboard.      | `false`                                                                                                                                                |
| `tap.capture.stopAfter`                             | Set to a duration (e.g. `30s`) after which traffic processing stops when there is no WebSocket activity between the Worker and the Hub.     | `30s`                                                                                                                                                |
| `tap.release.repo`                        | URL of the Helm chart repository              | `https://helm.kubeshark.co`                             |
| `tap.release.name`                        | Helm release name                             | `kubeshark`                                             |
| `tap.release.namespace`                   | Helm release namespace                        | `default`                                               |
| `tap.persistentStorage`                   | Use `persistentVolumeClaim` instead of `emptyDir` | `false`                                             |
| `tap.persistentStorageStatic`             | Use static persistent volume provisioning (explicitly defined `PersistentVolume` ) | `false`            |
| `tap.persistentStoragePvcVolumeMode` | Set the pvc volume mode (Filesystem\|Block) | `Filesystem` |
| `tap.efsFileSytemIdAndPath`               | [EFS file system ID and, optionally, subpath and/or access point](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/access_points/README.md) `<FileSystemId>:<Path>:<AccessPointId>`  | ""                             |
| `tap.storageLimit`                        | Limit of either the `emptyDir` or `persistentVolumeClaim` | `5Gi`                                     |
| `tap.storageClass`                        | Storage class of the `PersistentVolumeClaim`          | `standard`                                      |
| `tap.dryRun`                              | Preview of all pods matching the regex, without tapping them    | `false`                               |
| `tap.dnsConfig.nameservers`               | Nameservers to use for DNS resolution          | `[]`                                                    |
| `tap.dnsConfig.searches`                  | Search domains to use for DNS resolution       | `[]`                                                    |
| `tap.dnsConfig.options`                   | DNS options to use for DNS resolution          | `[]`                                                    |
| `tap.resources.hub.limits.cpu`            | CPU limit for hub                             | `""`  (no limit)                                                 |
| `tap.resources.hub.limits.memory`         | Memory limit for hub                          | `5Gi`                                                |
| `tap.resources.hub.requests.cpu`          | CPU request for hub                           | `50m`                                                   |
| `tap.resources.hub.requests.memory`       | Memory request for hub                        | `50Mi`                                                  |
| `tap.resources.sniffer.limits.cpu`        | CPU limit for sniffer                         | `""`  (no limit)                                                    |
| `tap.resources.sniffer.limits.memory`     | Memory limit for sniffer                      | `3Gi`                                                |
| `tap.resources.sniffer.requests.cpu`      | CPU request for sniffer                       | `50m`                                                   |
| `tap.resources.sniffer.requests.memory`   | Memory request for sniffer                    | `50Mi`                                                  |
| `tap.resources.tracer.limits.cpu`         | CPU limit for tracer                          | `""`  (no limit)                                                     |
| `tap.resources.tracer.limits.memory`      | Memory limit for tracer                       | `3Gi`                                                |
| `tap.resources.tracer.requests.cpu`       | CPU request for tracer                        | `50m`                                                   |
| `tap.resources.tracer.requests.memory`    | Memory request for tracer                     | `50Mi`                                                  |
| `tap.probes.hub.initialDelaySeconds`      | Initial delay before probing the hub         | `15`                                                    |
| `tap.probes.hub.periodSeconds`            | Period between probes for the hub             | `10`                                                    |
| `tap.probes.hub.successThreshold`         | Number of successful probes before considering the hub healthy | `1`                                        |
| `tap.probes.hub.failureThreshold`         | Number of failed probes before considering the hub unhealthy | `3`                                           |
| `tap.probes.sniffer.initialDelaySeconds`  | Initial delay before probing the sniffer     | `15`                                                    |
| `tap.probes.sniffer.periodSeconds`        | Period between probes for the sniffer         | `10`                                                    |
| `tap.probes.sniffer.successThreshold`     | Number of successful probes before considering the sniffer healthy | `1`                                    |
| `tap.probes.sniffer.failureThreshold`     | Number of failed probes before considering the sniffer unhealthy | `3`                                       |
| `tap.serviceMesh`                         | Capture traffic from service meshes like Istio, Linkerd, Consul, etc.          | `true`                                                  |
| `tap.tls`                                 | Capture the encrypted/TLS traffic from cryptography libraries like OpenSSL                         | `true`                                                  |
| `tap.disableTlsLog`                       | Suppress logging for TLS/eBPF                 | `true`                                                 |
| `tap.labels`                              | Kubernetes labels to apply to all Kubeshark resources  | `{}`                                                    |
| `tap.annotations`                         | Kubernetes annotations to apply to all Kubeshark resources | `{}`                                                |
| `tap.nodeSelectorTerms.workers`                   | Node selector terms for worker components                       | `[{"matchExpressions":[{"key":"kubernetes.io/os","operator":"In","values":["linux"]}]}]` |
| `tap.nodeSelectorTerms.hub`                   | Node selector terms for hub component                 | `[{"matchExpressions":[{"key":"kubernetes.io/os","operator":"In","values":["linux"]}]}]` |
| `tap.nodeSelectorTerms.front`                   | Node selector terms for front-end component                         | `[{"matchExpressions":[{"key":"kubernetes.io/os","operator":"In","values":["linux"]}]}]` |
| `tap.priorityClass`                   | Priority class name for Kubeshark components                         | `""`                                                |
| `tap.tolerations.workers`                  | Tolerations for worker components                         | `[{"operator": "Exists", "effect": "NoExecute"}]` |
| `tap.tolerations.hub`                  | Tolerations for hub component                         | `[]` |
| `tap.tolerations.front`                  | Tolerations for front-end component                         | `[]` |
| `tap.auth.enabled`                        | Enable authentication                         | `false`                                                 |
| `tap.auth.type`                           | Authentication type (available options: `saml`, `dex`)      | `saml`                                              |
| `tap.auth.approvedEmails`                 | List of approved email addresses for authentication              | `[]`                                                    |
| `tap.auth.approvedDomains`                | List of approved email domains for authentication                | `[]`                                                    |
| `tap.auth.saml.idpMetadataUrl`                    | SAML IDP metadata URL <br/>(effective, if `tap.auth.type = saml`)                                  | ``                                                      |
| `tap.auth.saml.x509crt`                   | A self-signed X.509 `.cert` contents <br/>(effective, if `tap.auth.type = saml`)          | ``                                                      |
| `tap.auth.saml.x509key`                   | A self-signed X.509 `.key` contents <br/>(effective, if `tap.auth.type = saml`)           | ``                                                      |
| `tap.auth.saml.roleAttribute`             | A SAML attribute name corresponding to user's authorization role <br/>(effective, if `tap.auth.type = saml`)  | `role` |
| `tap.auth.saml.roles`                     | A list of SAML authorization roles and their permissions <br/>(effective, if `tap.auth.type = saml`)  | `{"admin":{"canDownloadPCAP":true,"canUpdateTargetedPods":true,"canUseScripting":true, "scriptingPermissions":{"canSave":true, "canActivate":true, "canDelete":true}, "canStopTrafficCapturing":true, "filter":"","showAdminConsoleLink":true}}` |
| `tap.ingress.enabled`                     | Enable `Ingress`                                | `false`                                                 |
| `tap.ingress.className`                   | Ingress class name                            | `""`                                                    |
| `tap.ingress.host`                        | Host of the `Ingress`                          | `ks.svc.cluster.local`                                  |
| `tap.ingress.tls`                         | `Ingress` TLS configuration                     | `[]`                                                    |
| `tap.ingress.annotations`                 | `Ingress` annotations                           | `{}`                                                    |
| `tap.routing.front.basePath`             | Set this value to serve `front` under specific base path. Example: `/custompath` (forward slash must be present)         | `""`       |
| `tap.ipv6`                                | Enable IPv6 support for the front-end                        | `true`                                                  |
| `tap.debug`                               | Enable debug mode                             | `false`                                                 |
| `tap.telemetry.enabled`                   | Enable anonymous usage statistics collection           | `true`                                                  |
| `tap.resourceGuard.enabled`               | Enable resource guard worker process, which watches RAM/disk usage and enables/disables traffic capture based on available resources | `false` |
| `tap.secrets`                             | List of secrets to be used as source for environment variables (e.g. `kubeshark-license`) | `[]`                                                    |
| `tap.sentry.enabled`                      | Enable sending of error logs to Sentry          | `true` (only for qualified users)                                                  |
| `tap.sentry.environment`                      | Sentry environment to label error logs with      | `production`                                                  |
| `tap.defaultFilter`                       | Sets the default dashboard KFL filter (e.g. `http`). By default, this value is set to filter out noisy protocols such as DNS, UDP, ICMP and TCP. The user can easily change this, **temporarily**, in the Dashboard. For a permanent change, you should change this value in the `values.yaml` or `config.yaml` file.        | `"!dns and !error"`                                    |
| `tap.liveConfigMapChangesDisabled`        | If set to `true`, all user functionality (scripting, targeting settings, global & default KFL modification, traffic recording, traffic capturing on/off, protocol dissectors) involving dynamic ConfigMap changes from UI will be disabled     | `false`      |
| `tap.globalFilter`                        | Prepends to any KFL filter and can be used to limit what is visible in the dashboard. For example, `redact("request.headers.Authorization")` will redact the appropriate field. Another example `!dns` will not show any DNS traffic.      | `""`                                        |
| `tap.metrics.port`                  | Pod port used to expose Prometheus metrics          | `49100`                                                  |
| `tap.enabledDissectors`                   | This is an array of strings representing the list of supported protocols. Remove or comment out redundant protocols (e.g., dns).| The default list excludes: `udp` and `tcp` |
| `tap.mountBpf`                            | The BPF filesystem needs to be mounted for eBPF to work properly. This Helm value determines whether Kubeshark will attempt to mount the filesystem. This option is not required if the filesystem is already mounted. | `true`|
| `tap.gitops.enabled`                          | Enable GitOps functionality. This will allow you to use GitOps to manage your Kubeshark configuration. | `false` |
| `logs.file`                               | Logs dump path                      | `""`                                                    |
| `pcapdump.enabled`                        | Enable recording of all captured traffic. Whatever Kubeshark captures, subject to pod targeting rules, is stored in PCAP files ready to be viewed by external tools                 | `true`                                                                                                  |
| `pcapdump.maxTime`                        | The time window into the past that will be stored. Older traffic will be discarded.  | `2h`  |
| `pcapdump.maxSize`                        | The maximum storage size the PCAP files may consume. Older files are discarded once this limit would be exceeded.   | `500MB`  |
| `kube.configPath`                         | Path to the `kubeconfig` file (`$HOME/.kube/config`)            | `""`                                                    |
| `kube.context`                            | Kubernetes context to use for the deployment  | `""`                                                    |
| `dumpLogs`                                | Enable dumping of logs         | `false`                                                 |
| `headless`                                | Enable running in headless mode               | `false`                                                 |
| `license`                                 | License key for the Pro/Enterprise edition    | `""`                                                    |
| `scripting.env`                           | Environment variables for the scripting      | `{}`                                                    |
| `scripting.source`                        | Source directory of the scripts                | `""`                                                    |
| `scripting.watchScripts`                  | Enable watch mode for the scripts in source directory          | `true`                                                  |
| `timezone`                                | IANA time zone applied to time shown in the front-end | `""` (local time zone applies) |
| `supportChatEnabled`                      | Enable real-time support chat channel based on Intercom | `false` |
| `internetConnectivity`                    | Turns off API requests that are dependent on Internet connectivity such as `telemetry` and `online-support`. | `true` |
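
For reference, a minimal `values.yaml` sketch combining a few of the parameters above (the values are illustrative):

```yaml
tap:
  regex: ".*"
  namespaces:
    - default
  storageLimit: 10Gi
  defaultFilter: "http"
license: YOUR_LICENSE_GOES_HERE
```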

# Installing with SAML enabled

### Prerequisites:

##### 1. Generate X.509 certificate & key (TL;DR: https://ubuntu.com/server/docs/security-certificates)

**Example:**
```shell
openssl genrsa -out mykey.key 2048
openssl req -new -key mykey.key -out mycsr.csr
openssl x509 -signkey mykey.key -in mycsr.csr -req -days 365 -out mycert.crt
```

**What you get:**
- `mycert.crt` - use it for `tap.auth.saml.x509crt`
- `mykey.key` - use it for `tap.auth.saml.x509key`

##### 2. Prepare your SAML IDP

You should set up the required SAML IDP (Google, Auth0, your custom IDP, etc.)

During setup, the IDP will typically ask you to enter:
- Metadata URL
- ACS URL (Assertion Consumer Service URL, aka Callback URL)
- SLO URL (Single Logout URL)

Correspondingly, enter these values (if you run the default Kubeshark setup):
- [http://localhost:8899/saml/metadata](http://localhost:8899/saml/metadata)
- [http://localhost:8899/saml/acs](http://localhost:8899/saml/acs)
- [http://localhost:8899/saml/slo](http://localhost:8899/saml/slo)

Otherwise, if you have `tap.ingress.enabled == true`, adjust the protocol and domain accordingly (example domain shown):
- [https://kubeshark.example.com/saml/metadata](https://kubeshark.example.com/saml/metadata)
- [https://kubeshark.example.com/saml/acs](https://kubeshark.example.com/saml/acs)
- [https://kubeshark.example.com/saml/slo](https://kubeshark.example.com/saml/slo)

```shell
helm install kubeshark kubeshark/kubeshark -f values.yaml
```

Set this `values.yaml`:
```yaml
tap:
  auth:
    enabled: true
    type: saml
    saml:
      idpMetadataUrl: "https://ti..th0.com/samlp/metadata/MpWiDCM..qdnDG"
      x509crt: |
        -----BEGIN CERTIFICATE-----
        MIIDlTCCAn0CFFRUzMh+dZvp+FvWd4gRaiBVN8EvMA0GCSqGSIb3DQEBCwUAMIGG
        MSQwIgYJKoZIhvcNAQkBFhV3ZWJtYXN0ZXJAZXhhbXBsZS5jb20wHhcNMjMxMjI4
        ........<redacted: please, generate your own X.509 cert>........
        ZMzM7YscqZwoVhTOhrD4/5nIfOD/hTWG/MBe2Um1V1IYF8aVEllotTKTgsF6ZblA
        miCOgl6lIlZy
        -----END CERTIFICATE-----
      x509key: |
        -----BEGIN PRIVATE KEY-----
        MIIEvwIBADANBgkqhkiG9w0BAQEFAASCBKkwggSlAgEAAoIBAQDlgDFKsRHj+mok
        euOF0IpwToOEpQGtafB75ytv3psD/tQAzEIug+rkDriVvsfcvafj0qcaTeYvnCoz
        ........<redacted: please, generate your own X.509 key>.........
        sUpBCu0E3nRJM/QB2ui5KhNR7uvPSL+kSsaEq19/mXqsL+mRi9aqy2wMEvUSU/kt
        UaV5sbRtTzYLxpOSQyi8CEFA+A==
        -----END PRIVATE KEY-----
```
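
Alternatively, a sketch that passes the generated certificate and key files directly on the command line using Helm's `--set-file` flag (the `mycert.crt` / `mykey.key` files come from the prerequisite step; the metadata URL is a placeholder):

```shell
helm install kubeshark kubeshark/kubeshark \
  --set tap.auth.enabled=true \
  --set tap.auth.type=saml \
  --set tap.auth.saml.idpMetadataUrl="https://<your-idp>/samlp/metadata" \
  --set-file tap.auth.saml.x509crt=mycert.crt \
  --set-file tap.auth.saml.x509key=mykey.key
```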

# Installing with Dex OIDC authentication

[**Click here to see full docs**](https://docs.kubeshark.co/en/saml#installing-with-oidc-enabled-dex-idp).

Choose this option if **you already have a running instance** of Dex in your cluster and
you want to set up Dex OIDC authentication for Kubeshark users.

Kubeshark supports authentication using [Dex - A Federated OpenID Connect Provider](https://dexidp.io/).
Dex is an abstraction layer designed for integrating a wide variety of Identity Providers.

**Requirement:**
Your Dex IdP must have a publicly accessible URL.

### Prerequisites:

**1. If you configured Ingress for Kubeshark:**

(see section: "Installing with Ingress (EKS) enabled")

OAuth2 callback URL is: <br/>
`https://<kubeshark-ingress-hostname>/api/oauth2/callback`

**2. If you did not configure Ingress for Kubeshark:**

OAuth2 callback URL is: <br/>
`http://0.0.0.0:8899/api/oauth2/callback`

Use the chosen OAuth2 callback URL when replacing `<your-kubeshark-host>` in Step 3.

**3. Add this static client to your Dex IdP configuration (`config.yaml`):**
```yaml
staticClients:
   - id: kubeshark
     secret: create your own client password
     name: Kubeshark
     redirectURIs:
     - https://<your-kubeshark-host>/api/oauth2/callback
```

**Final step:**

Add these helm values to set up OIDC authentication powered by your Dex IdP:

```yaml
# values.yaml

tap:
  auth:
    enabled: true
    type: dex
    dexOidc:
      issuer: <put Dex IdP issuer URL here>
      clientId: kubeshark
      clientSecret: create your own client password
      refreshTokenLifetime: "3960h" # 165 days
      oauth2StateParamExpiry: "10m"
      bypassSslCaCheck: false
```

---

**Note:**<br/>
Set `tap.auth.dexOidc.bypassSslCaCheck: true`
to allow Kubeshark to communicate with a Dex IdP whose SSL certificate is signed by an unknown Certificate Authority.

This setting helps you avoid SSL CA-related errors such as:<br/>
`tls: failed to verify certificate: x509: certificate signed by unknown authority`
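
A minimal `values.yaml` sketch of that override:

```yaml
tap:
  auth:
    dexOidc:
      bypassSslCaCheck: true
```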

---

Once you run `helm install kubeshark kubeshark/kubeshark -f ./values.yaml`, Kubeshark will be installed with (Dex) OIDC authentication enabled.

---

# Installing your own Dex IdP along with Kubeshark

Choose this option if **you need to deploy an instance of Dex IdP** along with Kubeshark and
want to set up Dex OIDC authentication for Kubeshark users.

Depending on Ingress enabled/disabled, your Dex configuration might differ.

**Requirement:**
Please, configure Ingress using `tap.ingress` for your Kubeshark installation. For example:

```yaml
tap:
  ingress:
    enabled: true
    className: "alb"
    host: ks.example.com
    tls: []
    annotations:
      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:7..8:certificate/b...65c
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/scheme: internet-facing
```

The following Dex settings will have these values:

| Setting                                               | Value                                        |
|-------------------------------------------------------|----------------------------------------------|
| `tap.auth.dexOidc.issuer`                             | `https://ks.example.com/dex`                 |
| `tap.auth.dexConfig.issuer`                           | `https://ks.example.com/dex`                 |
| `tap.auth.dexConfig.staticClients -> redirectURIs`    | `https://ks.example.com/api/oauth2/callback` |
| `tap.auth.dexConfig.connectors -> config.redirectURI` | `https://ks.example.com/dex/callback`        |

---

### Before proceeding with Dex IdP installation:

Please, make sure to prepare the following things first.

1. Choose **[Connectors](https://dexidp.io/docs/connectors/)** to enable in Dex IdP.
   - i.e. how many kinds of "Log in with ..." options you'd like to offer your users
   - You will need to specify connectors in `tap.auth.dexConfig.connectors`
2. Choose type of **[Storage](https://dexidp.io/docs/configuration/storage/)** to use in Dex IdP.
   - You will need to specify storage settings in `tap.auth.dexConfig.storage`
   - default: `memory`
3. Decide on the OAuth2 `?state=` param expiration time:
   - field: `tap.auth.dexOidc.oauth2StateParamExpiry`
   - default: `10m` (10 minutes)
   - valid time units are `s`, `m`, `h`
4. Decide on the refresh token expiration:
    - field 1: `tap.auth.dexOidc.refreshTokenLifetime`
    - field 2: `tap.auth.dexConfig.expiry.refreshTokens.absoluteLifetime`
    - default: `3960h` (165 days)
    - valid time units are `s`, `m`, `h`
5. Create a unique & secure password to set in these fields:
    - field 1: `tap.auth.dexOidc.clientSecret`
    - field 2: `tap.auth.dexConfig.staticClients -> secret`
    - password must be the same for these 2 fields
6. Discover more possibilities of **[Dex Configuration](https://dexidp.io/docs/configuration/)**
   - if you decide to include more configuration options, make sure to add them into `tap.auth.dexConfig`
---

### Once you are ready with all the points described above:

Use these helm `values.yaml` fields to:
- Deploy your own instance of Dex IdP along with Kubeshark
- Enable OIDC authentication for Kubeshark users

Make sure to:
- Replace `<your-ingress-hostname>` with the correct Kubeshark Ingress host (`tap.ingress.host`).
  - refer to section **Installing with Ingress (EKS) enabled** to find out how you can configure Ingress host.

Helm `values.yaml`:
```yaml
tap:
  auth:
    enabled: true
    type: dex
    dexOidc:
      issuer: https://<your-ingress-hostname>/dex

      # Client ID/secret must be taken from `tap.auth.dexConfig.staticClients -> id/secret`
      clientId: kubeshark
      clientSecret: create your own client password

      refreshTokenLifetime: "3960h" # 165 days
      oauth2StateParamExpiry: "10m"
      bypassSslCaCheck: false
    dexConfig:
      # This field is REQUIRED!
      #
      # The base path of Dex and the external name of the OpenID Connect service.
      # This is the canonical URL that all clients MUST use to refer to Dex. If a
      # path is provided, Dex's HTTP service will listen at a non-root URL.
      issuer: https://<your-ingress-hostname>/dex

      # Expiration configuration for tokens, signing keys, etc.
      expiry:
        refreshTokens:
          validIfNotUsedFor: "2160h" # 90 days
          absoluteLifetime: "3960h"  # 165 days

      # This field is REQUIRED!
      #
      # The storage configuration determines where Dex stores its state.
      # See the documentation (https://dexidp.io/docs/storage/) for further information.
      storage:
        type: memory

      # This field is REQUIRED!
      #
      # Attention:
      # Do not change this field and its values.
      # This field is required for internal Kubeshark-to-Dex communication.
      #
      # HTTP service configuration
      web:
        http: 0.0.0.0:5556

      # This field is REQUIRED!
      #
      # Attention:
      # Do not change this field and its values.
      # This field is required for internal Kubeshark-to-Dex communication.
      #
      # Telemetry configuration
      telemetry:
        http: 0.0.0.0:5558

      # This field is REQUIRED!
      #
      # Static clients registered in Dex by default.
      staticClients:
        - id: kubeshark
          secret: create your own client password
          name: Kubeshark
          redirectURIs:
          - https://<your-ingress-hostname>/api/oauth2/callback

      # Enable the password database.
      # It's a "virtual" connector (identity provider) that stores
      # login credentials in Dex's store.
      enablePasswordDB: true

      # Connectors are used to authenticate users against upstream identity providers.
      # See the documentation (https://dexidp.io/docs/connectors/) for further information.
      #
      # Attention:
      # When you define a new connector, `config.redirectURI` must be:
      # https://<your-ingress-hostname>/dex/callback
      #
      # Example with Google connector:
      # connectors:
      #  - type: google
      #    id: google
      #    name: Google
      #    config:
      #      clientID: your Google Cloud Auth app client ID
      #      clientSecret: your Google Cloud Auth app client secret
      #      redirectURI: https://<your-ingress-hostname>/dex/callback
      connectors: []
```
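
Finally, install (or upgrade) Kubeshark with these values, e.g.:

```shell
helm install kubeshark kubeshark/kubeshark -f values.yaml
```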
0707010000002D000081A4000000000000000000000001689B9CB30000098C000000000000000000000000000000000000002B00000000kubeshark-cli-52.8.1/helm-chart/metrics.md# Metrics

Kubeshark exposes metrics from its `worker` components.
These can be useful for monitoring and debugging purposes.

## Configuration

By default, Kubeshark uses port `49100` to expose metrics via service `kubeshark-worker-metrics`.
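
For a quick local check, one can port-forward the metrics service and fetch the endpoint (assuming the service exposes the same port number and the conventional Prometheus `/metrics` path):

```shell
# In one terminal: forward the worker metrics service locally
kubectl port-forward service/kubeshark-worker-metrics 49100:49100

# In another terminal: fetch and filter the Kubeshark metrics
curl -s http://localhost:49100/metrics | grep kubeshark_
```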

If you use the [kube-prometheus-stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) community Helm chart, an additional scrape configuration for the Kubeshark worker metrics endpoint can be set with the following values:

```yaml
prometheus:
  enabled: true
  prometheusSpec:
    additionalScrapeConfigs: |
      - job_name: 'kubeshark-worker-metrics'
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_name]
            target_label: pod
          - source_labels: [__meta_kubernetes_pod_node_name]
            target_label: node
          - source_labels: [__meta_kubernetes_endpoint_port_name]
            action: keep
            regex: ^metrics$
          - source_labels: [__address__, __meta_kubernetes_endpoint_port_number]
            action: replace
            regex: ([^:]+)(?::\d+)?
            replacement: $1:49100
            target_label: __address__
          - action: labelmap
            regex: __meta_kubernetes_service_label_(.+)
```


## Available metrics

| Name | Type | Description | 
| --- | --- | --- | 
| kubeshark_received_packets_total | Counter | Total number of packets received | 
| kubeshark_dropped_packets_total | Counter | Total number of packets dropped | 
| kubeshark_dropped_chunks_total  | Counter | Total number of dropped packet chunks | 
| kubeshark_processed_bytes_total | Counter | Total number of bytes processed |
| kubeshark_tcp_packets_total | Counter | Total number of TCP packets | 
| kubeshark_dns_packets_total | Counter | Total number of DNS packets | 
| kubeshark_icmp_packets_total | Counter | Total number of ICMP packets | 
| kubeshark_reassembled_tcp_payloads_total | Counter | Total number of reassembled TCP payloads |
| kubeshark_matched_pairs_total | Counter | Total number of matched pairs | 
| kubeshark_dropped_tcp_streams_total | Counter | Total number of dropped TCP streams | 
| kubeshark_live_tcp_streams | Gauge | Number of live TCP streams |

## Ready-to-use Dashboard

You can import a ready-to-use dashboard from [Grafana's Dashboards Portal](https://grafana.com/grafana/dashboards/21332-kubeshark-dashboard-v3-4/).
0707010000002E000041ED000000000000000000000002689B9CB300000000000000000000000000000000000000000000002A00000000kubeshark-cli-52.8.1/helm-chart/templates0707010000002F000081A4000000000000000000000001689B9CB3000001E0000000000000000000000000000000000000004200000000kubeshark-cli-52.8.1/helm-chart/templates/01-service-account.yaml---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    {{- include "kubeshark.labels" . | nindent 4 }}
  {{- if .Values.tap.annotations }}
  annotations:
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: {{ include "kubeshark.serviceAccountName" . }}
  namespace: {{ .Release.Namespace }}
{{- if .Values.tap.docker.imagePullSecrets }}
imagePullSecrets:
  {{- range .Values.tap.docker.imagePullSecrets }}
  - name: {{ . }}
  {{- end }}
{{- end }}
07070100000030000081A4000000000000000000000001689B9CB30000066E000000000000000000000000000000000000003F00000000kubeshark-cli-52.8.1/helm-chart/templates/02-cluster-role.yaml---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    {{- include "kubeshark.labels" . | nindent 4 }}
  {{- if .Values.tap.annotations }}
  annotations:
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: kubeshark-cluster-role-{{ .Release.Namespace }}
  namespace: {{ .Release.Namespace }}
rules:
  - apiGroups:
      - ""
      - extensions
      - apps
    resources:
      - nodes
      - pods
      - services
      - endpoints
      - persistentvolumeclaims
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups:
    - networking.k8s.io
    resources:
    - networkpolicies
    verbs:
    - get
    - list
    - watch
    - create
    - update
    - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    {{- include "kubeshark.labels" . | nindent 4 }}
  annotations:
  {{- if .Values.tap.annotations }}
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: kubeshark-self-config-role
  namespace: {{ .Release.Namespace }}
rules:
  - apiGroups:
      - ""
      - v1
    resourceNames:
      - kubeshark-secret
      - kubeshark-config-map
      - kubeshark-secret-default
      - kubeshark-config-map-default
    resources:
      - secrets
      - configmaps
    verbs:
      - create
      - get
      - watch
      - list
      - update
      - patch
      - delete
  - apiGroups:
      - ""
      - v1
    resources:
      - secrets
      - configmaps
      - pods/log
    verbs:
      - create
      - get
07070100000031000081A4000000000000000000000001689B9CB30000049A000000000000000000000000000000000000004700000000kubeshark-cli-52.8.1/helm-chart/templates/03-cluster-role-binding.yaml---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    {{- include "kubeshark.labels" . | nindent 4 }}
  {{- if .Values.tap.annotations }}
  annotations:
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: kubeshark-cluster-role-binding-{{ .Release.Namespace }}
  namespace: {{ .Release.Namespace }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubeshark-cluster-role-{{ .Release.Namespace }}
subjects:
  - kind: ServiceAccount
    name: {{ include "kubeshark.serviceAccountName" . }}
    namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    {{- include "kubeshark.labels" . | nindent 4 }}
  annotations:
  {{- if .Values.tap.annotations }}
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: kubeshark-self-config-role-binding
  namespace: {{ .Release.Namespace }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubeshark-self-config-role
subjects:
  - kind: ServiceAccount
    name: {{ include "kubeshark.serviceAccountName" . }}
    namespace: {{ .Release.Namespace }}
07070100000032000081A4000000000000000000000001689B9CB300001868000000000000000000000000000000000000004100000000kubeshark-cli-52.8.1/helm-chart/templates/04-hub-deployment.yaml---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubeshark.co/app: hub
    {{- include "kubeshark.labels" . | nindent 4 }}
  {{- if .Values.tap.annotations }}
  annotations:
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: {{ include "kubeshark.name" . }}-hub
  namespace: {{ .Release.Namespace }}
spec:
  replicas: 1  # Set the desired number of replicas
  selector:
    matchLabels:
      app.kubeshark.co/app: hub
      {{- include "kubeshark.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app.kubeshark.co/app: hub
        {{- include "kubeshark.labels" . | nindent 8 }}
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      serviceAccountName: {{ include "kubeshark.serviceAccountName" . }}
      {{- if .Values.tap.priorityClass }}
      priorityClassName: {{ .Values.tap.priorityClass | quote }}
      {{- end }}
      containers:
        - name: hub
          command:
            - ./hub
            - -port
            - "8080"
            - -loglevel
            - '{{ .Values.logLevel | default "warning" }}'
            - -capture-stop-after
            - "{{ .Values.tap.capture.stopAfter | default "5m" }}"
            {{- if .Values.tap.gitops.enabled }}
            - -gitops
            {{- end }}
          {{- if .Values.tap.secrets }}
          envFrom:
            {{- range .Values.tap.secrets }}
            - secretRef:
                name: {{ . }}
            {{- end }}
          {{- end }}
          env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: SENTRY_ENABLED
            value: '{{ (include "sentry.enabled" .) }}'
          - name: SENTRY_ENVIRONMENT
            value: '{{ .Values.tap.sentry.environment }}'
          - name: KUBESHARK_CLOUD_API_URL
            value: 'https://api.kubeshark.co'
          - name: PROFILING_ENABLED
            value: '{{ .Values.tap.pprof.enabled }}'
        {{- if .Values.tap.docker.overrideImage.hub }}
          image: '{{ .Values.tap.docker.overrideImage.hub }}'
        {{- else if .Values.tap.docker.overrideTag.hub }}
          image: '{{ .Values.tap.docker.registry }}/hub:{{ .Values.tap.docker.overrideTag.hub }}'
        {{ else }}
          image: '{{ .Values.tap.docker.registry }}/hub:{{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (include "kubeshark.defaultVersion" .) }}'
        {{- end }}
          imagePullPolicy: {{ .Values.tap.docker.imagePullPolicy }}
          readinessProbe:
            periodSeconds: {{ .Values.tap.probes.hub.periodSeconds }}
            failureThreshold: {{ .Values.tap.probes.hub.failureThreshold }}
            successThreshold: {{ .Values.tap.probes.hub.successThreshold }}
            initialDelaySeconds: {{ .Values.tap.probes.hub.initialDelaySeconds }}
            tcpSocket:
              port: 8080
          livenessProbe:
            periodSeconds: {{ .Values.tap.probes.hub.periodSeconds }}
            failureThreshold: {{ .Values.tap.probes.hub.failureThreshold }}
            successThreshold: {{ .Values.tap.probes.hub.successThreshold }}
            initialDelaySeconds: {{ .Values.tap.probes.hub.initialDelaySeconds }}
            tcpSocket:
              port: 8080
          resources:
            limits:
              {{ if ne (toString .Values.tap.resources.hub.limits.cpu) "0" }}
              cpu: {{ .Values.tap.resources.hub.limits.cpu }}
              {{ end }}
              {{ if ne (toString .Values.tap.resources.hub.limits.memory) "0" }}
              memory: {{ .Values.tap.resources.hub.limits.memory }}
              {{ end }}
            requests:
              {{ if ne (toString .Values.tap.resources.hub.requests.cpu) "0" }}
              cpu: {{ .Values.tap.resources.hub.requests.cpu }}
              {{ end }}
              {{ if ne (toString .Values.tap.resources.hub.requests.memory) "0" }}
              memory: {{ .Values.tap.resources.hub.requests.memory }}
              {{ end }}
          volumeMounts:
          - name: saml-x509-volume
            mountPath: "/etc/saml/x509"
            readOnly: true
{{- if gt (len .Values.tap.nodeSelectorTerms.hub) 0}}
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              {{- toYaml .Values.tap.nodeSelectorTerms.hub | nindent 12 }}
{{- end }}
      {{- if or .Values.tap.dns.nameservers .Values.tap.dns.searches .Values.tap.dns.options }}
      dnsConfig:
        {{- if .Values.tap.dns.nameservers }}
        nameservers:
        {{- range .Values.tap.dns.nameservers }}
          - {{ . | quote }}
        {{- end }}
        {{- end }}
        {{- if .Values.tap.dns.searches }}
        searches:
        {{- range .Values.tap.dns.searches }}
          - {{ . | quote }}
        {{- end }}
        {{- end }}
        {{- if .Values.tap.dns.options }}
        options:
        {{- range .Values.tap.dns.options }}
          - name: {{ .name | quote }}
            {{- if .value }}
            value: {{ .value | quote }}
            {{- end }}
        {{- end }}
        {{- end }}
      {{- end }}
      {{- if .Values.tap.tolerations.hub }}
      tolerations:
      {{- range .Values.tap.tolerations.hub }}
        - key: {{ .key | quote }}
          operator: {{ .operator | quote }}
          {{- if .value }}
          value: {{ .value | quote }}
          {{- end }}
          {{- if .effect }}
          effect: {{ .effect | quote }}
          {{- end }}
          {{- if .tolerationSeconds }}
          tolerationSeconds: {{ .tolerationSeconds }}
          {{- end }}
      {{- end }}
      {{- end }}
      volumes:
      - name: saml-x509-volume
        projected:
          sources:
          - secret:
              name: kubeshark-saml-x509-crt-secret
              items:
              - key: AUTH_SAML_X509_CRT
                path: kubeshark.crt
          - secret:
              name: kubeshark-saml-x509-key-secret
              items:
              - key: AUTH_SAML_X509_KEY
                path: kubeshark.key
07070100000033000081A4000000000000000000000001689B9CB3000001C4000000000000000000000000000000000000003E00000000kubeshark-cli-52.8.1/helm-chart/templates/05-hub-service.yaml---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubeshark.co/app: hub
    {{- include "kubeshark.labels" . | nindent 4 }}
  {{- if .Values.tap.annotations }}
  annotations:
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: kubeshark-hub
  namespace: {{ .Release.Namespace }}
spec:
  ports:
    - name: kubeshark-hub
      port: 80
      targetPort: 8080
  selector:
    app.kubeshark.co/app: hub
  type: ClusterIP
07070100000034000081A4000000000000000000000001689B9CB300001F0B000000000000000000000000000000000000004300000000kubeshark-cli-52.8.1/helm-chart/templates/06-front-deployment.yamlapiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubeshark.co/app: front
    {{- include "kubeshark.labels" . | nindent 4 }}
  {{- if .Values.tap.annotations }}
  annotations:
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: {{ include "kubeshark.name" . }}-front
  namespace: {{ .Release.Namespace }}
spec:
  replicas: 1  # Set the desired number of replicas
  selector:
    matchLabels:
      app.kubeshark.co/app: front
      {{- include "kubeshark.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app.kubeshark.co/app: front
        {{- include "kubeshark.labels" . | nindent 8 }}
    spec:
      containers:
        - env:
            - name: REACT_APP_AUTH_ENABLED
              value: '{{- if or (and .Values.cloudLicenseEnabled (not (empty .Values.license))) (not .Values.internetConnectivity) -}}
                        {{ (and .Values.tap.auth.enabled (eq .Values.tap.auth.type "dex")) | ternary true false }}
                      {{- else -}}
                        {{ .Values.cloudLicenseEnabled | ternary "true" .Values.tap.auth.enabled }}
                      {{- end }}'
            - name: REACT_APP_AUTH_TYPE
              value: '{{- if and .Values.cloudLicenseEnabled (not (eq .Values.tap.auth.type "dex")) -}}
                        default
                      {{- else -}}
                        {{ .Values.tap.auth.type }}
                      {{- end }}'
            - name: REACT_APP_COMPLETE_STREAMING_ENABLED
              value: '{{- if and (hasKey .Values.tap "dashboard") (hasKey .Values.tap.dashboard "completeStreamingEnabled") -}}
                        {{ eq .Values.tap.dashboard.completeStreamingEnabled true | ternary "true" "false" }}
                      {{- else -}}
                        true
                      {{- end }}'
            - name: REACT_APP_AUTH_SAML_IDP_METADATA_URL
              value: '{{ not (eq .Values.tap.auth.saml.idpMetadataUrl "") | ternary .Values.tap.auth.saml.idpMetadataUrl " " }}'
            - name: REACT_APP_TIMEZONE
              value: '{{ not (eq .Values.timezone "") | ternary .Values.timezone " " }}'
            - name: REACT_APP_SCRIPTING_DISABLED
              value: '{{- if .Values.tap.liveConfigMapChangesDisabled -}}
                        {{- if .Values.demoModeEnabled -}}
                          {{ .Values.demoModeEnabled | ternary false true }}
                        {{- else -}}
                          true
                        {{- end }}
                      {{- else -}}
                        false
                      {{- end }}'
            - name: REACT_APP_TARGETED_PODS_UPDATE_DISABLED
              value: '{{ .Values.tap.liveConfigMapChangesDisabled }}'
            - name: REACT_APP_PRESET_FILTERS_CHANGING_ENABLED
              value: '{{ .Values.tap.liveConfigMapChangesDisabled | ternary "false" "true" }}'
            - name: REACT_APP_BPF_OVERRIDE_DISABLED
              value: '{{ eq .Values.tap.packetCapture "af_packet" | ternary "false" "true" }}'
            - name: REACT_APP_RECORDING_DISABLED
              value: '{{ .Values.tap.liveConfigMapChangesDisabled }}'
            - name: REACT_APP_STOP_TRAFFIC_CAPTURING_DISABLED
              value: '{{- if and .Values.tap.liveConfigMapChangesDisabled .Values.tap.capture.stopped -}}
                        false
                      {{- else -}}
                        {{ .Values.tap.liveConfigMapChangesDisabled | ternary "true" "false" }}
                      {{- end -}}'
            - name: 'REACT_APP_CLOUD_LICENSE_ENABLED'
              value: '{{- if or (and .Values.cloudLicenseEnabled (not (empty .Values.license))) (not .Values.internetConnectivity) -}}
                        "false"
                      {{- else -}}
                        {{ .Values.cloudLicenseEnabled }}
                      {{- end }}'
            - name: 'REACT_APP_AI_ASSISTANT_ENABLED'
              value: '{{ .Values.aiAssistantEnabled | ternary "true" "false" }}'
            - name: REACT_APP_SUPPORT_CHAT_ENABLED
              value: '{{ and .Values.supportChatEnabled .Values.internetConnectivity | ternary "true" "false" }}'
            - name: REACT_APP_BETA_ENABLED
              value: '{{ default false .Values.betaEnabled | ternary "true" "false" }}'
            - name: REACT_APP_DISSECTORS_UPDATING_ENABLED
              value: '{{ .Values.tap.liveConfigMapChangesDisabled | ternary "false" "true" }}'
            - name: REACT_APP_SENTRY_ENABLED
              value: '{{ (include "sentry.enabled" .) }}'
            - name: REACT_APP_SENTRY_ENVIRONMENT
              value: '{{ .Values.tap.sentry.environment }}'
        {{- if .Values.tap.docker.overrideImage.front }}
          image: '{{ .Values.tap.docker.overrideImage.front }}'
        {{- else if .Values.tap.docker.overrideTag.front }}
          image: '{{ .Values.tap.docker.registry }}/front:{{ .Values.tap.docker.overrideTag.front }}'
        {{ else }}
          image: '{{ .Values.tap.docker.registry }}/front:{{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (include "kubeshark.defaultVersion" .) }}'
        {{- end }}
          imagePullPolicy: {{ .Values.tap.docker.imagePullPolicy }}
          name: kubeshark-front
          livenessProbe:
            periodSeconds: 1
            failureThreshold: 3
            successThreshold: 1
            initialDelaySeconds: 3
            tcpSocket:
              port: 8080
          readinessProbe:
            periodSeconds: 1
            failureThreshold: 3
            successThreshold: 1
            initialDelaySeconds: 3
            tcpSocket:
              port: 8080
            timeoutSeconds: 1
          resources:
            limits:
              cpu: 750m
              memory: 1Gi
            requests:
              cpu: 50m
              memory: 50Mi
          volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/conf.d/default.conf
              subPath: default.conf
              readOnly: true
{{- if gt (len .Values.tap.nodeSelectorTerms.front) 0}}
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              {{- toYaml .Values.tap.nodeSelectorTerms.front | nindent 12 }}
{{- end }}
      {{- if or .Values.tap.dns.nameservers .Values.tap.dns.searches .Values.tap.dns.options }}
      dnsConfig:
        {{- if .Values.tap.dns.nameservers }}
        nameservers:
        {{- range .Values.tap.dns.nameservers }}
          - {{ . | quote }}
        {{- end }}
        {{- end }}
        {{- if .Values.tap.dns.searches }}
        searches:
        {{- range .Values.tap.dns.searches }}
          - {{ . | quote }}
        {{- end }}
        {{- end }}
        {{- if .Values.tap.dns.options }}
        options:
        {{- range .Values.tap.dns.options }}
          - name: {{ .name | quote }}
            {{- if .value }}
            value: {{ .value | quote }}
            {{- end }}
        {{- end }}
        {{- end }}
      {{- end }}
      {{- if .Values.tap.tolerations.front }}
      tolerations:
      {{- range .Values.tap.tolerations.front }}
        - key: {{ .key | quote }}
          operator: {{ .operator | quote }}
          {{- if .value }}
          value: {{ .value | quote }}
          {{- end }}
          {{- if .effect }}
          effect: {{ .effect | quote }}
          {{- end }}
          {{- if .tolerationSeconds }}
          tolerationSeconds: {{ .tolerationSeconds }}
          {{- end }}
      {{- end }}
      {{- end }}
      volumes:
        - name: nginx-config
          configMap:
            name: kubeshark-nginx-config-map
      dnsPolicy: ClusterFirstWithHostNet
      serviceAccountName: {{ include "kubeshark.serviceAccountName" . }}
      {{- if .Values.tap.priorityClass }}
      priorityClassName: {{ .Values.tap.priorityClass | quote }}
      {{- end }}
07070100000035000081A4000000000000000000000001689B9CB3000001AC000000000000000000000000000000000000004000000000kubeshark-cli-52.8.1/helm-chart/templates/07-front-service.yaml---
apiVersion: v1
kind: Service
metadata:
  labels:
    {{- include "kubeshark.labels" . | nindent 4 }}
  {{- if .Values.tap.annotations }}
  annotations:
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: kubeshark-front
  namespace: {{ .Release.Namespace }}
spec:
  ports:
    - name: kubeshark-front
      port: 80
      targetPort: 8080
  selector:
    app.kubeshark.co/app: front
  type: ClusterIP
07070100000036000081A4000000000000000000000001689B9CB300000481000000000000000000000000000000000000004A00000000kubeshark-cli-52.8.1/helm-chart/templates/08-persistent-volume-claim.yaml---
{{- if .Values.tap.persistentStorageStatic }}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kubeshark-persistent-volume
  namespace: {{ .Release.Namespace }}
spec:
  capacity:
    storage: {{ .Values.tap.storageLimit }}
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: {{ .Values.tap.storageClass }}
  {{- if .Values.tap.efsFileSytemIdAndPath }}
  csi:
    driver: efs.csi.aws.com
    volumeHandle: {{ .Values.tap.efsFileSytemIdAndPath }}
  {{ end }}
---
{{ end }}
{{- if .Values.tap.persistentStorage }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    {{- include "kubeshark.labels" . | nindent 4 }}
  {{- if .Values.tap.annotations }}
  annotations:
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: kubeshark-persistent-volume-claim
  namespace: {{ .Release.Namespace }}
spec:
  volumeMode: {{ .Values.tap.persistentStoragePvcVolumeMode }}
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: {{ .Values.tap.storageLimit }}
  storageClassName: {{ .Values.tap.storageClass }}
status: {}
{{- end }}
07070100000037000081A4000000000000000000000001689B9CB3000040B1000000000000000000000000000000000000004400000000kubeshark-cli-52.8.1/helm-chart/templates/09-worker-daemon-set.yaml---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app.kubeshark.co/app: worker
    sidecar.istio.io/inject: "false"
    {{- include "kubeshark.labels" . | nindent 4 }}
  {{- if .Values.tap.annotations }}
  annotations:
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: kubeshark-worker-daemon-set
  namespace: {{ .Release.Namespace }}
spec:
  selector:
    matchLabels:
      app.kubeshark.co/app: worker
      {{- include "kubeshark.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app.kubeshark.co/app: worker
        {{- include "kubeshark.labels" . | nindent 8 }}
      name: kubeshark-worker-daemon-set
      namespace: {{ .Release.Namespace }}
    spec:
      {{- if or .Values.tap.mountBpf .Values.tap.persistentStorage}}
      initContainers:
      {{- end }}
      {{- if .Values.tap.mountBpf }}
        - command:
          - /bin/sh
          - -c
          - mkdir -p /sys/fs/bpf && mount | grep -q '/sys/fs/bpf' || mount -t bpf bpf /sys/fs/bpf
          {{- if .Values.tap.docker.overrideTag.worker }}
          image: '{{ .Values.tap.docker.registry }}/worker:{{ .Values.tap.docker.overrideTag.worker }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
        {{ else }}
          image: '{{ .Values.tap.docker.registry }}/worker:{{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (include "kubeshark.defaultVersion" .) }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
        {{- end }}
          imagePullPolicy: {{ .Values.tap.docker.imagePullPolicy }}
          name: mount-bpf
          securityContext:
            privileged: true
          volumeMounts:
          - mountPath: /sys
            name: sys
            mountPropagation: Bidirectional
      {{- end }}
      {{- if .Values.tap.persistentStorage }}
        - command:
          - /bin/sh
          - -c
          - mkdir -p /app/data/$NODE_NAME && rm -rf /app/data/$NODE_NAME/tracer_*
          {{- if .Values.tap.docker.overrideTag.worker }}
          image: '{{ .Values.tap.docker.registry }}/worker:{{ .Values.tap.docker.overrideTag.worker }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
        {{ else }}
          image: '{{ .Values.tap.docker.registry }}/worker:{{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (include "kubeshark.defaultVersion" .) }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
        {{- end }}
          imagePullPolicy: {{ .Values.tap.docker.imagePullPolicy }}
          name: cleanup-data-dir
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
          - mountPath: /app/data
            name: data
      {{- end }}
      containers:
        - command:
            - ./worker
            - -i
            - any
            - -port
            - '{{ .Values.tap.proxy.worker.srvPort }}'
            - -metrics-port
            - '{{ .Values.tap.metrics.port }}'
            - -packet-capture
            - '{{ .Values.tap.packetCapture }}'
            - -loglevel
            - '{{ .Values.logLevel | default "warning" }}'
          {{- if not .Values.tap.tls }}
            - -disable-tracer
          {{- end }}
          {{- if .Values.tap.serviceMesh }}
            - -servicemesh
          {{- end }}
            - -procfs
            - /hostproc
          {{- if .Values.tap.resourceGuard.enabled }}
            - -enable-resource-guard
          {{- end }}
          {{- if .Values.tap.watchdog.enabled }}
            - -enable-watchdog
          {{- end }}
            - -resolution-strategy
            - '{{ .Values.tap.misc.resolutionStrategy }}'
            - -staletimeout
            - '{{ .Values.tap.misc.staleTimeoutSeconds }}'
        {{- if .Values.tap.docker.overrideImage.worker }}
          image: '{{ .Values.tap.docker.overrideImage.worker }}'
        {{- else if .Values.tap.docker.overrideTag.worker }}
          image: '{{ .Values.tap.docker.registry }}/worker:{{ .Values.tap.docker.overrideTag.worker }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
        {{ else }}
          image: '{{ .Values.tap.docker.registry }}/worker:{{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (include "kubeshark.defaultVersion" .) }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
        {{- end }}
          imagePullPolicy: {{ .Values.tap.docker.imagePullPolicy }}
          name: sniffer
          ports:
            - containerPort: {{ .Values.tap.metrics.port }}
              protocol: TCP
              name: metrics
          env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: TCP_STREAM_CHANNEL_TIMEOUT_MS
            value: '{{ .Values.tap.misc.tcpStreamChannelTimeoutMs }}'
          - name: TCP_STREAM_CHANNEL_TIMEOUT_SHOW
            value: '{{ .Values.tap.misc.tcpStreamChannelTimeoutShow }}'
          - name: KUBESHARK_CLOUD_API_URL
            value: 'https://api.kubeshark.co'
          - name: PROFILING_ENABLED
            value: '{{ .Values.tap.pprof.enabled }}'
          - name: SENTRY_ENABLED
            value: '{{ (include "sentry.enabled" .) }}'
          - name: SENTRY_ENVIRONMENT
            value: '{{ .Values.tap.sentry.environment }}'
          resources:
            limits:
              {{ if ne (toString .Values.tap.resources.sniffer.limits.cpu) "0" }}
              cpu: {{ .Values.tap.resources.sniffer.limits.cpu }}
              {{ end }}
              {{ if ne (toString .Values.tap.resources.sniffer.limits.memory) "0" }}
              memory: {{ .Values.tap.resources.sniffer.limits.memory }}
              {{ end }}
            requests:
              {{ if ne (toString .Values.tap.resources.sniffer.requests.cpu) "0" }}
              cpu: {{ .Values.tap.resources.sniffer.requests.cpu }}
              {{ end }}
              {{ if ne (toString .Values.tap.resources.sniffer.requests.memory) "0" }}
              memory: {{ .Values.tap.resources.sniffer.requests.memory }}
              {{ end }}
          securityContext:
            privileged: {{ .Values.tap.securityContext.privileged }}
            {{- if not .Values.tap.securityContext.privileged }}
            {{- $aaProfile := .Values.tap.securityContext.appArmorProfile }}
            {{- $selinuxOpts := .Values.tap.securityContext.seLinuxOptions }}
            {{- if or (ne $aaProfile.type "") (ne $aaProfile.localhostProfile "") }}
            appArmorProfile:
              {{- if ne $aaProfile.type "" }}
              type: {{ $aaProfile.type }}
              {{- end }}
              {{- if ne $aaProfile.localhostProfile "" }}
              localhostProfile: {{ $aaProfile.localhostProfile }}
              {{- end }}
            {{- end }}
            {{- if or (ne $selinuxOpts.level "") (ne $selinuxOpts.role "") (ne $selinuxOpts.type "") (ne $selinuxOpts.user "") }}
            seLinuxOptions:
              {{- if ne $selinuxOpts.level "" }}
              level: {{ $selinuxOpts.level }}
              {{- end }}
              {{- if ne $selinuxOpts.role "" }}
              role: {{ $selinuxOpts.role }}
              {{- end }}
              {{- if ne $selinuxOpts.type "" }}
              type: {{ $selinuxOpts.type }}
              {{- end }}
              {{- if ne $selinuxOpts.user "" }}
              user: {{ $selinuxOpts.user }}
              {{- end }}
            {{- end }}
            capabilities:
              add:
                {{- range .Values.tap.securityContext.capabilities.networkCapture }}
                {{ print "- " . }}
                {{- end }}
                {{- if .Values.tap.serviceMesh }}
                {{- range .Values.tap.securityContext.capabilities.serviceMeshCapture }}
                {{ print "- " . }}
                {{- end }}
                {{- end }}
                {{- if .Values.tap.securityContext.capabilities.ebpfCapture }}
                {{- range .Values.tap.securityContext.capabilities.ebpfCapture }}
                {{ print "- " . }}
                {{- end }}
                {{- end }}
              drop:
                - ALL
            {{- end }}
          readinessProbe:
            periodSeconds: {{ .Values.tap.probes.sniffer.periodSeconds }}
            failureThreshold: {{ .Values.tap.probes.sniffer.failureThreshold }}
            successThreshold: {{ .Values.tap.probes.sniffer.successThreshold }}
            initialDelaySeconds: {{ .Values.tap.probes.sniffer.initialDelaySeconds }}
            tcpSocket:
              port: {{ .Values.tap.proxy.worker.srvPort }}
          livenessProbe:
            periodSeconds: {{ .Values.tap.probes.sniffer.periodSeconds }}
            failureThreshold: {{ .Values.tap.probes.sniffer.failureThreshold }}
            successThreshold: {{ .Values.tap.probes.sniffer.successThreshold }}
            initialDelaySeconds: {{ .Values.tap.probes.sniffer.initialDelaySeconds }}
            tcpSocket:
              port: {{ .Values.tap.proxy.worker.srvPort }}
          volumeMounts:
            - mountPath: /hostproc
              name: proc
              readOnly: true
            - mountPath: /sys
              name: sys
              readOnly: true
              mountPropagation: HostToContainer
            - mountPath: /app/data
              name: data
      {{- if .Values.tap.tls }}
        - command:
            - ./tracer
            - -procfs
            - /hostproc
          {{- if .Values.tap.disableTlsLog }}
            - -disable-tls-log
          {{- end }}
          {{- if .Values.tap.pprof.enabled }}
            - -port
            - '{{ add .Values.tap.proxy.worker.srvPort 1 }}'
          {{- end }}
            - -loglevel
            - '{{ .Values.logLevel | default "warning" }}'
        {{- if .Values.tap.docker.overrideTag.worker }}
          image: '{{ .Values.tap.docker.registry }}/worker:{{ .Values.tap.docker.overrideTag.worker }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
        {{ else }}
          image: '{{ .Values.tap.docker.registry }}/worker:{{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (include "kubeshark.defaultVersion" .) }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
        {{- end }}
          imagePullPolicy: {{ .Values.tap.docker.imagePullPolicy }}
          name: tracer
          env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: PROFILING_ENABLED
            value: '{{ .Values.tap.pprof.enabled }}'
          - name: SENTRY_ENABLED
            value: '{{ (include "sentry.enabled" .) }}'
          - name: SENTRY_ENVIRONMENT
            value: '{{ .Values.tap.sentry.environment }}'
          resources:
            limits:
              {{ if ne (toString .Values.tap.resources.tracer.limits.cpu) "0" }}
              cpu: {{ .Values.tap.resources.tracer.limits.cpu }}
              {{ end }}
              {{ if ne (toString .Values.tap.resources.tracer.limits.memory) "0" }}
              memory: {{ .Values.tap.resources.tracer.limits.memory }}
              {{ end }}
            requests:
              {{ if ne (toString .Values.tap.resources.tracer.requests.cpu) "0" }}
              cpu: {{ .Values.tap.resources.tracer.requests.cpu }}
              {{ end }}
              {{ if ne (toString .Values.tap.resources.tracer.requests.memory) "0" }}
              memory: {{ .Values.tap.resources.tracer.requests.memory }}
              {{ end }}
          securityContext:
            privileged: {{ .Values.tap.securityContext.privileged }}
            {{- if not .Values.tap.securityContext.privileged }}
            {{- $aaProfile := .Values.tap.securityContext.appArmorProfile }}
            {{- $selinuxOpts := .Values.tap.securityContext.seLinuxOptions }}
            {{- if or (ne $aaProfile.type "") (ne $aaProfile.localhostProfile "") }}
            appArmorProfile:
              {{- if ne $aaProfile.type "" }}
              type: {{ $aaProfile.type }}
              {{- end }}
              {{- if ne $aaProfile.localhostProfile "" }}
              localhostProfile: {{ $aaProfile.localhostProfile }}
              {{- end }}
            {{- end }}
            {{- if or (ne $selinuxOpts.level "") (ne $selinuxOpts.role "") (ne $selinuxOpts.type "") (ne $selinuxOpts.user "") }}
            seLinuxOptions:
              {{- if ne $selinuxOpts.level "" }}
              level: {{ $selinuxOpts.level }}
              {{- end }}
              {{- if ne $selinuxOpts.role "" }}
              role: {{ $selinuxOpts.role }}
              {{- end }}
              {{- if ne $selinuxOpts.type "" }}
              type: {{ $selinuxOpts.type }}
              {{- end }}
              {{- if ne $selinuxOpts.user "" }}
              user: {{ $selinuxOpts.user }}
              {{- end }}
            {{- end }}
            capabilities:
              add:
                {{- range .Values.tap.securityContext.capabilities.ebpfCapture }}
                {{ print "- " . }}
                {{- end }}
                {{- range .Values.tap.securityContext.capabilities.networkCapture }}
                {{ print "- " . }}
                {{- end }}
              drop:
                - ALL
            {{- end }}
          volumeMounts:
            - mountPath: /hostproc
              name: proc
              readOnly: true
            - mountPath: /sys
              name: sys
              readOnly: true
              mountPropagation: HostToContainer
            - mountPath: /app/data
              name: data
            - mountPath: /etc/os-release
              name: os-release
              readOnly: true
            - mountPath: /hostroot
              mountPropagation: HostToContainer
              name: root
              readOnly: true
      {{- end }}
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      serviceAccountName: {{ include "kubeshark.serviceAccountName" . }}
      {{- if .Values.tap.priorityClass }}
      priorityClassName: {{ .Values.tap.priorityClass | quote }}
      {{- end }}
      {{- if .Values.tap.tolerations.workers }}
      tolerations:
      {{- range .Values.tap.tolerations.workers }}
        - key: {{ .key | quote }}
          operator: {{ .operator | quote }}
          {{- if .value }}
          value: {{ .value | quote }}
          {{- end }}
          {{- if .effect }}
          effect: {{ .effect | quote }}
          {{- end }}
          {{- if .tolerationSeconds }}
          tolerationSeconds: {{ .tolerationSeconds }}
          {{- end }}
      {{- end }}
      {{- end }}
{{- if gt (len .Values.tap.nodeSelectorTerms.workers) 0}}
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              {{- toYaml .Values.tap.nodeSelectorTerms.workers | nindent 12 }}
{{- end }}
      {{- if or .Values.tap.dns.nameservers .Values.tap.dns.searches .Values.tap.dns.options }}
      dnsConfig:
        {{- if .Values.tap.dns.nameservers }}
        nameservers:
        {{- range .Values.tap.dns.nameservers }}
          - {{ . | quote }}
        {{- end }}
        {{- end }}
        {{- if .Values.tap.dns.searches }}
        searches:
        {{- range .Values.tap.dns.searches }}
          - {{ . | quote }}
        {{- end }}
        {{- end }}
        {{- if .Values.tap.dns.options }}
        options:
        {{- range .Values.tap.dns.options }}
          - name: {{ .name | quote }}
            {{- if .value }}
            value: {{ .value | quote }}
            {{- end }}
        {{- end }}
        {{- end }}
      {{- end }}
      volumes:
        - hostPath:
            path: /proc
          name: proc
        - hostPath:
            path: /sys
          name: sys
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - hostPath:
            path: /etc/os-release
          name: os-release
        - hostPath:
            path: /
          name: root
        - name: data
{{- if .Values.tap.persistentStorage }}
          persistentVolumeClaim:
            claimName: kubeshark-persistent-volume-claim
{{- else }}
          emptyDir:
            sizeLimit: {{ .Values.tap.storageLimit }}
{{- end }}
07070100000038000081A4000000000000000000000001689B9CB30000040C000000000000000000000000000000000000003A00000000kubeshark-cli-52.8.1/helm-chart/templates/10-ingress.yaml---
{{- if .Values.tap.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.org/websocket-services: "kubeshark-front"
{{- if .Values.tap.annotations }}
    {{- toYaml .Values.tap.annotations | nindent 4 }}
{{- end }}
  {{- if .Values.tap.ingress.annotations }}
    {{- toYaml .Values.tap.ingress.annotations | nindent 4 }}
  {{- end }}
  labels:
    {{- include "kubeshark.labels" . | nindent 4 }}
  name: kubeshark-ingress
  namespace: {{ .Release.Namespace }}
spec:
  {{- if .Values.tap.ingress.className }}
  ingressClassName: {{ .Values.tap.ingress.className }}
  {{- end }}
  rules:
    - host: {{ .Values.tap.ingress.host }}
      http:
        paths:
          - backend:
              service:
                name: kubeshark-front
                port:
                  number: 80
            path: /
            pathType: Prefix
  {{- if .Values.tap.ingress.tls }}
  tls:
    {{- toYaml .Values.tap.ingress.tls | nindent 2 }}
  {{- end }}
status:
  loadBalancer: {}
{{- end }}
07070100000039000081A4000000000000000000000001689B9CB300000BA1000000000000000000000000000000000000004300000000kubeshark-cli-52.8.1/helm-chart/templates/11-nginx-config-map.yaml---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeshark-nginx-config-map
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "kubeshark.labels" . | nindent 4 }}
data:
  default.conf: |
    server {
      listen 8080;
{{- if .Values.tap.ipv6 }}
      listen [::]:8080;
{{- end }}
      access_log /dev/stdout;
      error_log /dev/stdout;

      client_body_buffer_size     64k;
      client_header_buffer_size   32k;
      large_client_header_buffers 8 64k;

      location {{ default "" (((.Values.tap).routing).front).basePath }}/api {
        rewrite ^{{ default "" (((.Values.tap).routing).front).basePath }}/api(.*)$ $1 break;
        proxy_pass http://kubeshark-hub;
        proxy_set_header   X-Forwarded-For $remote_addr;
        proxy_set_header   Host $http_host;
        proxy_set_header Upgrade websocket;
        proxy_set_header Connection Upgrade;
        proxy_set_header  Authorization $http_authorization;
        proxy_pass_header Authorization;
        proxy_connect_timeout 4s;
        proxy_read_timeout 120s;
        proxy_send_timeout 12s;
        proxy_pass_request_headers      on;
      }

      location {{ default "" (((.Values.tap).routing).front).basePath }}/saml {
        rewrite ^{{ default "" (((.Values.tap).routing).front).basePath }}/saml(.*)$ /saml$1 break;
        proxy_pass http://kubeshark-hub;
        proxy_set_header   X-Forwarded-For $remote_addr;
        proxy_set_header   Host $http_host;
        proxy_connect_timeout 4s;
        proxy_read_timeout 120s;
        proxy_send_timeout 12s;
        proxy_pass_request_headers on;
      }

{{- if .Values.tap.auth.dexConfig }}
      location /dex {
        rewrite ^{{ default "" (((.Values.tap).routing).front).basePath }}/dex(.*)$ /dex$1 break;
        proxy_pass http://kubeshark-dex;
        proxy_set_header   X-Forwarded-For $remote_addr;
        proxy_set_header   Host $http_host;
        proxy_set_header Upgrade websocket;
        proxy_set_header Connection Upgrade;
        proxy_set_header  Authorization $http_authorization;
        proxy_pass_header Authorization;
        proxy_connect_timeout 4s;
        proxy_read_timeout 120s;
        proxy_send_timeout 12s;
        proxy_pass_request_headers      on;
      }
{{- end }}

{{- if (((.Values.tap).routing).front).basePath }}
      location {{ .Values.tap.routing.front.basePath }} {
        rewrite ^{{ .Values.tap.routing.front.basePath }}(.*)$ $1 break;
        root   /usr/share/nginx/html;
        index  index.html index.htm;
        try_files $uri $uri/ /index.html;
        expires -1;
        add_header Cache-Control no-cache;
      }
{{- end }}

      location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
        try_files $uri $uri/ /index.html;
        expires -1;
        add_header Cache-Control no-cache;
      }
      error_page   500 502 503 504  /50x.html;
      location = /50x.html {
        root   /usr/share/nginx/html;
      }
    }

0707010000003A000081A4000000000000000000000001689B9CB300001598000000000000000000000000000000000000003D00000000kubeshark-cli-52.8.1/helm-chart/templates/12-config-map.yamlkind: ConfigMap
apiVersion: v1
metadata:
  name: {{ include "kubeshark.configmapName" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    app.kubeshark.co/app: hub
    {{- include "kubeshark.labels" . | nindent 4 }}
data:
    POD_REGEX: '{{ .Values.tap.regex }}'
    NAMESPACES: '{{ gt (len .Values.tap.namespaces) 0 | ternary (join "," .Values.tap.namespaces) "" }}'
    EXCLUDED_NAMESPACES: '{{ gt (len .Values.tap.excludedNamespaces) 0 | ternary (join "," .Values.tap.excludedNamespaces) "" }}'
    BPF_OVERRIDE: '{{ .Values.tap.bpfOverride }}'
    STOPPED: '{{ .Values.tap.capture.stopped | ternary "true" "false" }}'
    SCRIPTING_SCRIPTS: '{}'
    SCRIPTING_ACTIVE_SCRIPTS: '{{ gt (len .Values.scripting.active) 0 | ternary (join "," .Values.scripting.active) "" }}'
    INGRESS_ENABLED: '{{ .Values.tap.ingress.enabled }}'
    INGRESS_HOST: '{{ .Values.tap.ingress.host }}'
    PROXY_FRONT_PORT: '{{ .Values.tap.proxy.front.port }}'
    AUTH_ENABLED: '{{- if and .Values.cloudLicenseEnabled (not (empty .Values.license)) -}}
                      {{ and .Values.tap.auth.enabled (eq .Values.tap.auth.type "dex") | ternary true false }}
                  {{- else -}}
                      {{ .Values.cloudLicenseEnabled | ternary "true" (.Values.tap.auth.enabled | ternary "true" "") }}
                  {{- end }}'
    AUTH_TYPE: '{{- if and .Values.cloudLicenseEnabled (not (eq .Values.tap.auth.type "dex")) -}}
                  default
                {{- else -}}
                  {{ .Values.tap.auth.type }}
                {{- end }}'
    AUTH_SAML_IDP_METADATA_URL: '{{ .Values.tap.auth.saml.idpMetadataUrl }}'
    AUTH_SAML_ROLE_ATTRIBUTE: '{{ .Values.tap.auth.saml.roleAttribute }}'
    AUTH_SAML_ROLES: '{{ .Values.tap.auth.saml.roles | toJson }}'
    AUTH_OIDC_ISSUER: '{{ default "not set" (((.Values.tap).auth).dexOidc).issuer }}'
    AUTH_OIDC_REFRESH_TOKEN_LIFETIME: '{{ default "3960h" (((.Values.tap).auth).dexOidc).refreshTokenLifetime }}'
    AUTH_OIDC_STATE_PARAM_EXPIRY: '{{ default "10m" (((.Values.tap).auth).dexOidc).oauth2StateParamExpiry }}'
    AUTH_OIDC_BYPASS_SSL_CA_CHECK: '{{- if and
                                      (hasKey .Values.tap "auth")
                                      (hasKey .Values.tap.auth "dexOidc")
                                      (hasKey .Values.tap.auth.dexOidc "bypassSslCaCheck")
                                    -}}
                                      {{ eq .Values.tap.auth.dexOidc.bypassSslCaCheck true | ternary "true" "false" }}
                                    {{- else -}}
                                      false
                                    {{- end }}'
    TELEMETRY_DISABLED: '{{ not .Values.internetConnectivity | ternary "true" (not .Values.tap.telemetry.enabled | ternary "true" "false") }}'
    SCRIPTING_DISABLED: '{{- if .Values.tap.liveConfigMapChangesDisabled -}}
                           {{- if .Values.demoModeEnabled -}}
                             {{ .Values.demoModeEnabled | ternary false true }}
                           {{- else -}}
                             true
                           {{- end }}
                         {{- else -}}
                           false
                         {{- end }}'
    TARGETED_PODS_UPDATE_DISABLED: '{{ .Values.tap.liveConfigMapChangesDisabled | ternary "true" "" }}'
    PRESET_FILTERS_CHANGING_ENABLED: '{{ .Values.tap.liveConfigMapChangesDisabled | ternary "false" "true" }}'
    RECORDING_DISABLED: '{{ .Values.tap.liveConfigMapChangesDisabled | ternary "true" "" }}'
    STOP_TRAFFIC_CAPTURING_DISABLED: '{{- if and .Values.tap.liveConfigMapChangesDisabled .Values.tap.capture.stopped -}}
                                        false
                                      {{- else -}}
                                        {{ .Values.tap.liveConfigMapChangesDisabled | ternary "true" "false" }}
                                      {{- end }}'
    GLOBAL_FILTER: {{ include "kubeshark.escapeDoubleQuotes" .Values.tap.globalFilter | quote }}
    DEFAULT_FILTER: {{ include "kubeshark.escapeDoubleQuotes" .Values.tap.defaultFilter | quote }}
    TRAFFIC_SAMPLE_RATE: '{{ .Values.tap.misc.trafficSampleRate }}'
    JSON_TTL: '{{ .Values.tap.misc.jsonTTL }}'
    PCAP_TTL: '{{ .Values.tap.misc.pcapTTL }}'
    PCAP_ERROR_TTL: '{{ .Values.tap.misc.pcapErrorTTL }}'
    TIMEZONE: '{{ not (eq .Values.timezone "") | ternary .Values.timezone " " }}'
    CLOUD_LICENSE_ENABLED: '{{- if and .Values.cloudLicenseEnabled (not (empty .Values.license)) -}}
                              false
                            {{- else -}}
                              {{ .Values.cloudLicenseEnabled }}
                            {{- end }}'
    AI_ASSISTANT_ENABLED: '{{ .Values.aiAssistantEnabled | ternary "true" "false" }}'
    DUPLICATE_TIMEFRAME: '{{ .Values.tap.misc.duplicateTimeframe }}'
    ENABLED_DISSECTORS: '{{ gt (len .Values.tap.enabledDissectors) 0 | ternary (join "," .Values.tap.enabledDissectors) "" }}'
    CUSTOM_MACROS: '{{ toJson .Values.tap.customMacros }}'
    DISSECTORS_UPDATING_ENABLED: '{{ .Values.tap.liveConfigMapChangesDisabled | ternary "false" "true" }}'
    DETECT_DUPLICATES: '{{ .Values.tap.misc.detectDuplicates | ternary "true" "false" }}'
    PCAP_DUMP_ENABLE: '{{ .Values.pcapdump.enabled }}'
    PCAP_TIME_INTERVAL: '{{ .Values.pcapdump.timeInterval }}'
    PCAP_MAX_TIME: '{{ .Values.pcapdump.maxTime }}'
    PCAP_MAX_SIZE: '{{ .Values.pcapdump.maxSize }}'
    PORT_MAPPING: '{{ toJson .Values.tap.portMapping }}'
0707010000003B000081A4000000000000000000000001689B9CB300000455000000000000000000000000000000000000003900000000kubeshark-cli-52.8.1/helm-chart/templates/13-secret.yamlkind: Secret
apiVersion: v1
metadata:
  name: {{ include "kubeshark.secretName" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    app.kubeshark.co/app: hub
    {{- include "kubeshark.labels" . | nindent 4 }}
stringData:
    LICENSE: '{{ .Values.license }}'
    SCRIPTING_ENV: '{{ .Values.scripting.env | toJson }}'
    OIDC_CLIENT_ID: '{{ default "not set" (((.Values.tap).auth).dexOidc).clientId }}'
    OIDC_CLIENT_SECRET: '{{ default "not set" (((.Values.tap).auth).dexOidc).clientSecret }}'

---

kind: Secret
apiVersion: v1
metadata:
  name: kubeshark-saml-x509-crt-secret
  namespace: {{ .Release.Namespace }}
  labels:
    app.kubeshark.co/app: hub
    {{- include "kubeshark.labels" . | nindent 4 }}
stringData:
  AUTH_SAML_X509_CRT: |
    {{ .Values.tap.auth.saml.x509crt | nindent 4 }}

---

kind: Secret
apiVersion: v1
metadata:
  name: kubeshark-saml-x509-key-secret
  namespace: {{ .Release.Namespace }}
  labels:
    app.kubeshark.co/app: hub
    {{- include "kubeshark.labels" . | nindent 4 }}
stringData:
  AUTH_SAML_X509_KEY: |
    {{ .Values.tap.auth.saml.x509key | nindent 4 }}

---
0707010000003C000081A4000000000000000000000001689B9CB300000476000000000000000000000000000000000000005900000000kubeshark-cli-52.8.1/helm-chart/templates/14-openshift-security-context-constraints.yaml{{- if .Capabilities.APIVersions.Has "security.openshift.io/v1/SecurityContextConstraints" }}
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  labels:
    {{- include "kubeshark.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": pre-install
  {{- if .Values.tap.annotations }}
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: kubeshark-scc
priority: 10
allowPrivilegedContainer: true
allowHostDirVolumePlugin: true
allowHostNetwork: true
allowHostPorts: true
allowHostPID: true
allowHostIPC: true
readOnlyRootFilesystem: false
requiredDropCapabilities:
  - MKNOD
allowedCapabilities:
  - NET_RAW
  - NET_ADMIN
  - SYS_ADMIN
  - SYS_PTRACE
  - DAC_OVERRIDE
  - SYS_RESOURCE
  - SYS_MODULE
  - IPC_LOCK
runAsUser:
  type: RunAsAny
fsGroup:
  type: MustRunAs
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
seccompProfiles:
- '*'
volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - persistentVolumeClaim
  - secret
  - hostPath
  - projected
  - ephemeral
users:
  - system:serviceaccount:{{ .Release.Namespace }}:kubeshark-service-account
{{- end }}
0707010000003D000081A4000000000000000000000001689B9CB30000026C000000000000000000000000000000000000004900000000kubeshark-cli-52.8.1/helm-chart/templates/15-worker-service-metrics.yaml---
kind: Service
apiVersion: v1
metadata:
  labels:
    {{- include "kubeshark.labels" . | nindent 4 }}
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '{{ .Values.tap.metrics.port }}'
  {{- if .Values.tap.annotations }}
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: kubeshark-worker-metrics
  namespace: {{ .Release.Namespace }}
spec:
  selector:
    app.kubeshark.co/app: worker
    {{- include "kubeshark.labels" . | nindent 4 }}
  ports:
  - name: metrics
    protocol: TCP
    port: {{ .Values.tap.metrics.port }}
    targetPort: {{ .Values.tap.metrics.port }}
0707010000003E000081A4000000000000000000000001689B9CB300000218000000000000000000000000000000000000004600000000kubeshark-cli-52.8.1/helm-chart/templates/16-hub-service-metrics.yaml---
kind: Service
apiVersion: v1
metadata:
  labels:
    {{- include "kubeshark.labels" . | nindent 4 }}
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9100'
  {{- if .Values.tap.annotations }}
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: kubeshark-hub-metrics
  namespace: {{ .Release.Namespace }}
spec:
  selector:
    app.kubeshark.co/app: hub
    {{- include "kubeshark.labels" . | nindent 4 }}
  ports:
  - name: metrics
    protocol: TCP
    port: 9100
    targetPort: 9100
0707010000003F000081A4000000000000000000000001689B9CB3000008D1000000000000000000000000000000000000004300000000kubeshark-cli-52.8.1/helm-chart/templates/17-network-policies.yamlapiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  labels:
    {{- include "kubeshark.labels" . | nindent 4 }}
  {{- if .Values.tap.annotations }}
  annotations:
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: kubeshark-hub-network-policy
  namespace: {{ .Release.Namespace }}
spec:
  podSelector:
    matchLabels:
      app.kubeshark.co/app: hub
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - ports:
        - protocol: TCP
          port: 8080
    - ports:
        - protocol: TCP
          port: 9100
  egress:
    - {}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  labels:
    {{- include "kubeshark.labels" . | nindent 4 }}
  annotations:
  {{- if .Values.tap.annotations }}
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: kubeshark-front-network-policy
  namespace: {{ .Release.Namespace }}
spec:
  podSelector:
    matchLabels:
      app.kubeshark.co/app: front
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - ports:
        - protocol: TCP
          port: 8080
  egress:
    - {}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  labels:
    {{- include "kubeshark.labels" . | nindent 4 }}
  annotations:
  {{- if .Values.tap.annotations }}
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: kubeshark-dex-network-policy
  namespace: {{ .Release.Namespace }}
spec:
  podSelector:
    matchLabels:
      app.kubeshark.co/app: dex
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - ports:
        - protocol: TCP
          port: 5556
  egress:
    - {}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  labels:
    {{- include "kubeshark.labels" . | nindent 4 }}
  annotations:
  {{- if .Values.tap.annotations }}
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: kubeshark-worker-network-policy
  namespace: {{ .Release.Namespace }}
spec:
  podSelector:
    matchLabels:
      app.kubeshark.co/app: worker
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - ports:
        - protocol: TCP
          port: {{ .Values.tap.proxy.worker.srvPort }}
        - protocol: TCP
          port: {{ .Values.tap.metrics.port }}
  egress:
    - {}
07070100000040000081A4000000000000000000000001689B9CB300000401000000000000000000000000000000000000003E00000000kubeshark-cli-52.8.1/helm-chart/templates/18-cleanup-job.yaml{{ if .Values.tap.gitops.enabled -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: kubeshark-cleanup-job
  annotations:
    "helm.sh/hook": pre-delete
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: {{ include "kubeshark.serviceAccountName" . }}
      {{- if .Values.tap.priorityClass }}
      priorityClassName: {{ .Values.tap.priorityClass | quote }}
      {{- end }}
      restartPolicy: Never
      containers:
        - name: cleanup
        {{- if .Values.tap.docker.overrideImage.hub }}
          image: '{{ .Values.tap.docker.overrideImage.hub }}'
        {{- else if .Values.tap.docker.overrideTag.hub }}
          image: '{{ .Values.tap.docker.registry }}/hub:{{ .Values.tap.docker.overrideTag.hub }}'
        {{ else }}
          image: '{{ .Values.tap.docker.registry }}/hub:{{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (include "kubeshark.defaultVersion" .) }}'
        {{- end }}
          command: ["/app/cleanup"]
{{ end -}}
07070100000041000081A4000000000000000000000001689B9CB300000D3C000000000000000000000000000000000000004100000000kubeshark-cli-52.8.1/helm-chart/templates/18-dex-deployment.yaml{{- if .Values.tap.auth.dexConfig }}

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubeshark.co/app: dex
    {{- include "kubeshark.labels" . | nindent 4 }}
  {{- if .Values.tap.annotations }}
  annotations:
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: {{ include "kubeshark.name" . }}-dex
  namespace: {{ .Release.Namespace }}
spec:
  replicas: 1  # Set the desired number of replicas
  selector:
    matchLabels:
      app.kubeshark.co/app: dex
      {{- include "kubeshark.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app.kubeshark.co/app: dex
        {{- include "kubeshark.labels" . | nindent 8 }}
    spec:
      containers:
        - name: kubeshark-dex
          image: 'dexidp/dex:v2.42.0-alpine'
          ports:
            - name: http
              containerPort: 5556
              protocol: TCP
            - name: telemetry
              containerPort: 5558
              protocol: TCP
          args:
          - dex
          - serve
          - /etc/dex/dex-config.yaml
          imagePullPolicy: {{ .Values.tap.docker.imagePullPolicy }}
          volumeMounts:
            - name: dex-secret-conf-volume
              mountPath: /etc/dex/dex-config.yaml
              subPath: dex-config.yaml
              readOnly: true
          livenessProbe:
            httpGet:
              path: /healthz/live
              port: 5558
            periodSeconds: 1
            failureThreshold: 3
            successThreshold: 1
            initialDelaySeconds: 3
          readinessProbe:
            httpGet:
              path: /healthz/ready
              port: 5558
            periodSeconds: 1
            failureThreshold: 3
            successThreshold: 1
            initialDelaySeconds: 3
            timeoutSeconds: 1
          resources:
            limits:
              cpu: 750m
              memory: 1Gi
            requests:
              cpu: 50m
              memory: 50Mi
{{- if gt (len .Values.tap.nodeSelectorTerms.dex) 0}}
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              {{- toYaml .Values.tap.nodeSelectorTerms.dex | nindent 12 }}
{{- end }}
      {{- if or .Values.tap.dns.nameservers .Values.tap.dns.searches .Values.tap.dns.options }}
      dnsConfig:
        {{- if .Values.tap.dns.nameservers }}
        nameservers:
        {{- range .Values.tap.dns.nameservers }}
          - {{ . | quote }}
        {{- end }}
        {{- end }}
        {{- if .Values.tap.dns.searches }}
        searches:
        {{- range .Values.tap.dns.searches }}
          - {{ . | quote }}
        {{- end }}
        {{- end }}
        {{- if .Values.tap.dns.options }}
        options:
        {{- range .Values.tap.dns.options }}
          - name: {{ .name | quote }}
            {{- if .value }}
            value: {{ .value | quote }}
            {{- end }}
        {{- end }}
        {{- end }}
      {{- end }}
      volumes:
        - name: dex-secret-conf-volume
          secret:
            secretName: kubeshark-dex-conf-secret
      dnsPolicy: ClusterFirstWithHostNet
      serviceAccountName: {{ include "kubeshark.serviceAccountName" . }}
      {{- if .Values.tap.priorityClass }}
      priorityClassName: {{ .Values.tap.priorityClass | quote }}
      {{- end }}
{{- end }}
07070100000042000081A4000000000000000000000001689B9CB3000001F6000000000000000000000000000000000000003E00000000kubeshark-cli-52.8.1/helm-chart/templates/19-dex-service.yaml{{- if .Values.tap.auth.dexConfig }}

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubeshark.co/app: dex
    {{- include "kubeshark.labels" . | nindent 4 }}
  {{- if .Values.tap.annotations }}
  annotations:
    {{- toYaml .Values.tap.annotations | nindent 4 }}
  {{- end }}
  name: kubeshark-dex
  namespace: {{ .Release.Namespace }}
spec:
  ports:
    - name: kubeshark-dex
      port: 80
      targetPort: 5556
  selector:
    app.kubeshark.co/app: dex
  type: ClusterIP

{{- end }}
07070100000043000081A4000000000000000000000001689B9CB300000150000000000000000000000000000000000000003D00000000kubeshark-cli-52.8.1/helm-chart/templates/20-dex-secret.yaml{{- if .Values.tap.auth.dexConfig }}

kind: Secret
apiVersion: v1
metadata:
  name: kubeshark-dex-conf-secret
  namespace: {{ .Release.Namespace }}
  labels:
    app.kubeshark.co/app: hub
    {{- include "kubeshark.labels" . | nindent 4 }}
data:
  dex-config.yaml: {{ .Values.tap.auth.dexConfig | toYaml | b64enc | quote }}

{{- end }}
07070100000044000081A4000000000000000000000001689B9CB300000873000000000000000000000000000000000000003400000000kubeshark-cli-52.8.1/helm-chart/templates/NOTES.txtThank you for installing {{ title .Chart.Name }}.

Registry: {{ .Values.tap.docker.registry }}
Tag: {{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (printf "v%s" .Chart.Version) }}
{{- if .Values.tap.docker.overrideTag.worker }}
Overridden worker tag: {{ .Values.tap.docker.overrideTag.worker }}
{{- end }}
{{- if .Values.tap.docker.overrideTag.hub }}
Overridden hub tag: {{ .Values.tap.docker.overrideTag.hub }}
{{- end }}
{{- if .Values.tap.docker.overrideTag.front }}
Overridden front tag: {{ .Values.tap.docker.overrideTag.front }}
{{- end }}
{{- if .Values.tap.docker.overrideImage.worker }}
Overridden worker image: {{ .Values.tap.docker.overrideImage.worker }}
{{- end }}
{{- if .Values.tap.docker.overrideImage.hub }}
Overridden hub image: {{ .Values.tap.docker.overrideImage.hub }}
{{- end }}
{{- if .Values.tap.docker.overrideImage.front }}
Overridden front image: {{ .Values.tap.docker.overrideImage.front }}
{{- end }}

Your deployment has been successful. The release is named `{{ .Release.Name }}` and it has been deployed in the `{{ .Release.Namespace }}` namespace.

Notices:
{{- if .Values.supportChatEnabled}}
- Support chat using Intercom is enabled. It can be disabled using `--set supportChatEnabled=false`
{{- end }}
{{- if eq .Values.license ""}}
- No license key was detected. You can either log in or sign up through the dashboard, or download the license key from https://console.kubeshark.co/ and add it as `LICENSE` via a mounted secret (`tap.secrets`).
{{- end }}

{{ if .Values.tap.ingress.enabled }}

You can now access the application through the following URL:
http{{ if .Values.tap.ingress.tls }}s{{ end }}://{{ .Values.tap.ingress.host }}{{ default "" (((.Values.tap).routing).front).basePath }}/

{{- else }}
To access the application, follow these steps:

1. Perform port forwarding with the following command:

    kubectl port-forward -n {{ .Release.Namespace }} service/kubeshark-front 8899:80

2. Once port forwarding is done, you can access the application by visiting the following URL in your web browser:
    http://0.0.0.0:8899{{ default "" (((.Values.tap).routing).front).basePath }}/

{{- end }}
07070100000045000081A4000000000000000000000001689B9CB300000B97000000000000000000000000000000000000003700000000kubeshark-cli-52.8.1/helm-chart/templates/_helpers.tpl{{/*
Expand the name of the chart.
*/}}
{{- define "kubeshark.name" -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "kubeshark.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "kubeshark.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "kubeshark.labels" -}}
helm.sh/chart: {{ include "kubeshark.chart" . }}
{{ include "kubeshark.selectorLabels" . }}
app.kubernetes.io/version: {{ .Chart.Version | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if .Values.tap.labels }}
{{ toYaml .Values.tap.labels }}
{{- end }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "kubeshark.selectorLabels" -}}
app.kubernetes.io/name: {{ include "kubeshark.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/*
Create the name of the service account to use
*/}}
{{- define "kubeshark.serviceAccountName" -}}
{{- printf "%s-service-account" .Release.Name }}
{{- end }}

{{/*
Set configmap and secret names based on gitops.enabled
*/}}
{{- define "kubeshark.configmapName" -}}
kubeshark-config-map{{ if .Values.tap.gitops.enabled }}-default{{ end }}
{{- end -}}

{{- define "kubeshark.secretName" -}}
kubeshark-secret{{ if .Values.tap.gitops.enabled }}-default{{ end }}
{{- end -}}


{{/*
Escape double quotes in a string
*/}}
{{- define "kubeshark.escapeDoubleQuotes" -}}
  {{- regexReplaceAll "\"" . "\"" -}}
{{- end -}}

{{/*
Define debug docker tag suffix
*/}}
{{- define "kubeshark.dockerTagDebugVersion" -}}
{{- .Values.tap.pprof.enabled | ternary "-debug" "" }}
{{- end -}}

{{/*
Create docker tag default version
*/}}
{{- define "kubeshark.defaultVersion" -}}
{{- $defaultVersion := (printf "v%s" .Chart.Version) -}}
{{- if .Values.tap.docker.tagLocked }}
  {{- $defaultVersion = regexReplaceAll "^([^.]+\\.[^.]+).*" $defaultVersion "$1" -}}
{{- end }}
{{- $defaultVersion }}
{{- end -}}

{{/*
Set sentry based on internet connectivity and telemetry
*/}}
{{- define "sentry.enabled" -}}
  {{- $sentryEnabledVal := .Values.tap.sentry.enabled -}}
  {{- if not .Values.internetConnectivity -}}
    {{- $sentryEnabledVal = false -}}
  {{- else if not .Values.tap.telemetry.enabled -}}
    {{- $sentryEnabledVal = false -}}
  {{- end -}}
  {{- $sentryEnabledVal -}}
{{- end -}}

{{/*
Dex IdP: retrieve a secret for static client with a specific ID
*/}}
{{- define "getDexKubesharkStaticClientSecret" -}}
  {{- $clientId := .clientId -}}
  {{- range .clients }}
    {{- if eq .id $clientId }}
      {{- .secret }}
    {{- end }}
  {{- end }}
{{- end }}
07070100000046000081A4000000000000000000000001689B9CB30000134F000000000000000000000000000000000000002C00000000kubeshark-cli-52.8.1/helm-chart/values.yaml# find a detailed description here: https://github.com/kubeshark/kubeshark/blob/master/helm-chart/README.md
tap:
  docker:
    registry: docker.io/kubeshark
    tag: ""
    tagLocked: true
    imagePullPolicy: Always
    imagePullSecrets: []
    overrideImage:
      worker: ""
      hub: ""
      front: ""
    overrideTag:
      worker: ""
      hub: ""
      front: ""
  proxy:
    worker:
      srvPort: 48999
    hub:
      srvPort: 8898
    front:
      port: 8899
    host: 127.0.0.1
  regex: .*
  namespaces: []
  excludedNamespaces: []
  bpfOverride: ""
  capture:
    stopped: false
    stopAfter: 5m
  release:
    repo: https://helm.kubeshark.co
    name: kubeshark
    namespace: default
  persistentStorage: false
  persistentStorageStatic: false
  persistentStoragePvcVolumeMode: Filesystem
  efsFileSytemIdAndPath: ""
  secrets: []
  storageLimit: 5Gi
  storageClass: standard
  dryRun: false
  dns:
    nameservers: []
    searches: []
    options: []
  resources:
    hub:
      limits:
        cpu: "0"
        memory: 5Gi
      requests:
        cpu: 50m
        memory: 50Mi
    sniffer:
      limits:
        cpu: "0"
        memory: 5Gi
      requests:
        cpu: 50m
        memory: 50Mi
    tracer:
      limits:
        cpu: "0"
        memory: 5Gi
      requests:
        cpu: 50m
        memory: 50Mi
  probes:
    hub:
      initialDelaySeconds: 5
      periodSeconds: 5
      successThreshold: 1
      failureThreshold: 3
    sniffer:
      initialDelaySeconds: 5
      periodSeconds: 5
      successThreshold: 1
      failureThreshold: 3
  serviceMesh: true
  tls: true
  disableTlsLog: true
  packetCapture: best
  labels: {}
  annotations: {}
  nodeSelectorTerms:
    hub:
    - matchExpressions:
      - key: kubernetes.io/os
        operator: In
        values:
        - linux
    workers:
    - matchExpressions:
      - key: kubernetes.io/os
        operator: In
        values:
        - linux
    front:
    - matchExpressions:
      - key: kubernetes.io/os
        operator: In
        values:
        - linux
    dex:
    - matchExpressions:
      - key: kubernetes.io/os
        operator: In
        values:
        - linux
  tolerations:
    hub: []
    workers:
    - operator: Exists
      effect: NoExecute
    front: []
  auth:
    enabled: false
    type: saml
    saml:
      idpMetadataUrl: ""
      x509crt: ""
      x509key: ""
      roleAttribute: role
      roles:
        admin:
          filter: ""
          canDownloadPCAP: true
          canUseScripting: true
          scriptingPermissions:
            canSave: true
            canActivate: true
            canDelete: true
          canUpdateTargetedPods: true
          canStopTrafficCapturing: true
          showAdminConsoleLink: true
  ingress:
    enabled: false
    className: ""
    host: ks.svc.cluster.local
    tls: []
    annotations: {}
  priorityClass: ""
  routing:
    front:
      basePath: ""
  ipv6: true
  debug: false
  dashboard:
    completeStreamingEnabled: true
  telemetry:
    enabled: true
  resourceGuard:
    enabled: false
  watchdog:
    enabled: false
  gitops:
    enabled: false
  sentry:
    enabled: false
    environment: production
  defaultFilter: "!dns and !error"
  liveConfigMapChangesDisabled: false
  globalFilter: ""
  enabledDissectors:
  - amqp
  - dns
  - http
  - icmp
  - kafka
  - redis
  - ws
  - ldap
  - radius
  - diameter
  portMapping:
    http:
    - 80
    - 443
    - 8080
    amqp:
    - 5671
    - 5672
    kafka:
    - 9092
    redis:
    - 6379
    ldap:
    - 389
    diameter:
    - 3868
  customMacros:
    https: tls and (http or http2)
  metrics:
    port: 49100
  pprof:
    enabled: false
    port: 8000
    view: flamegraph
  misc:
    jsonTTL: 5m
    pcapTTL: 10s
    pcapErrorTTL: 60s
    trafficSampleRate: 100
    tcpStreamChannelTimeoutMs: 10000
    tcpStreamChannelTimeoutShow: false
    resolutionStrategy: auto
    duplicateTimeframe: 200ms
    detectDuplicates: false
    staleTimeoutSeconds: 30
  securityContext:
    privileged: true
    appArmorProfile:
      type: ""
      localhostProfile: ""
    seLinuxOptions:
      level: ""
      role: ""
      type: ""
      user: ""
    capabilities:
      networkCapture:
      - NET_RAW
      - NET_ADMIN
      serviceMeshCapture:
      - SYS_ADMIN
      - SYS_PTRACE
      - DAC_OVERRIDE
      ebpfCapture:
      - SYS_ADMIN
      - SYS_PTRACE
      - SYS_RESOURCE
      - IPC_LOCK
  mountBpf: true
logs:
  file: ""
  grep: ""
pcapdump:
  enabled: true
  timeInterval: 1m
  maxTime: 1h
  maxSize: 500MB
  time: time
  debug: false
  dest: ""
kube:
  configPath: ""
  context: ""
dumpLogs: false
headless: false
license: ""
cloudLicenseEnabled: true
aiAssistantEnabled: true
demoModeEnabled: false
supportChatEnabled: true
betaEnabled: false
internetConnectivity: true
scripting:
  env: {}
  source: ""
  sources: []
  watchScripts: true
  active: []
  console: true
timezone: ""
logLevel: warning
07070100000047000081A4000000000000000000000001689B9CB300000BF2000000000000000000000000000000000000002000000000kubeshark-cli-52.8.1/install.sh#!/bin/sh

EXE_NAME=kubeshark
ALIAS_NAME=ks
PROG_NAME=Kubeshark
INSTALL_PATH=/usr/local/bin/$EXE_NAME
ALIAS_PATH=/usr/local/bin/$ALIAS_NAME
REPO=https://github.com/kubeshark/kubeshark
OS=$(echo $(uname -s) | tr '[:upper:]' '[:lower:]')
ARCH=$(echo $(uname -m) | tr '[:upper:]' '[:lower:]')
SUPPORTED_PAIRS="linux_amd64 linux_arm64 darwin_amd64 darwin_arm64"

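# ANSI escape sequences used below for colored terminal output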
ESC="\033["
F_DEFAULT=39
F_RED=31
F_GREEN=32
F_YELLOW=33
B_DEFAULT=49
B_RED=41
B_BLUE=44
B_LIGHT_BLUE=104

if [ "$ARCH" = "x86_64" ]; then
    ARCH="amd64"
fi

if [ "$ARCH" = "aarch64" ]; then
    ARCH="arm64"
fi

echo $SUPPORTED_PAIRS | grep -w -q "${OS}_${ARCH}"

if [ $? != 0 ] ; then
	echo "\n${ESC}${F_RED}m🛑 Unsupported OS \"$OS\" or architecture \"$ARCH\". Failed to install $PROG_NAME.${ESC}${F_DEFAULT}m"
    echo "${ESC}${B_RED}mPlease report 🐛 to $REPO/issues${ESC}${F_DEFAULT}m"
	exit 1
fi

# Check for Homebrew and kubeshark installation
if command -v brew >/dev/null; then
    if brew list kubeshark >/dev/null 2>&1; then
        echo "📦 Found $PROG_NAME instance installed with Homebrew"
		echo "${ESC}${F_GREEN}m⬇️ Removing before installation with script${ESC}${F_DEFAULT}m"
        brew uninstall kubeshark
    fi
fi

echo "\n🦈 ${ESC}${F_DEFAULT};${B_BLUE}m Started to download $PROG_NAME ${ESC}${B_DEFAULT};${F_DEFAULT}m"

if curl -# --fail -Lo $EXE_NAME ${REPO}/releases/latest/download/${EXE_NAME}_${OS}_${ARCH} ; then
    chmod +x $PWD/$EXE_NAME
    echo "\n${ESC}${F_GREEN}m⬇️  $PROG_NAME is downloaded into $PWD/$EXE_NAME${ESC}${F_DEFAULT}m"
else
    echo "\n${ESC}${F_RED}m🛑 Couldn't download ${REPO}/releases/latest/download/${EXE_NAME}_${OS}_${ARCH}\n\
  ⚠️  Check your internet connection.\n\
  ⚠️  Make sure 'curl' command is available.\n\
  ⚠️  Make sure there is no directory named '${EXE_NAME}' in ${PWD}\n\
${ESC}${F_DEFAULT}m"
    echo "${ESC}${B_RED}mPlease report 🐛 to $REPO/issues${ESC}${F_DEFAULT}m"
    exit 1
fi

use_cmd=$EXE_NAME
printf "Do you want to install system-wide? Requires sudo 😇 (y/N)? "
old_stty_cfg=$(stty -g)
stty raw -echo ; answer=$(head -c 1) ; stty $old_stty_cfg
if echo "$answer" | grep -iq "^y" ;then
    echo "$answer"
    sudo mv ./$EXE_NAME $INSTALL_PATH || exit 1
    echo "${ESC}${F_GREEN}m$PROG_NAME is installed into $INSTALL_PATH${ESC}${F_DEFAULT}m\n"

	ls $ALIAS_PATH >> /dev/null 2>&1
	if [ $? != 0 ] ; then
		printf "Do you want to add 'ks' alias for Kubeshark? (y/N)? "
		old_stty_cfg=$(stty -g)
		stty raw -echo ; answer=$(head -c 1) ; stty $old_stty_cfg
		if echo "$answer" | grep -iq "^y" ; then
			echo "$answer"
			sudo ln -s $INSTALL_PATH $ALIAS_PATH

			use_cmd=$ALIAS_NAME
		else
			echo "$answer"
		fi
	else
		use_cmd=$ALIAS_NAME
	fi
else
	echo "$answer"
	use_cmd="./$EXE_NAME"
fi

echo "${ESC}${F_GREEN}m✅ You can use the ${ESC}${F_DEFAULT};${B_LIGHT_BLUE}m $use_cmd ${ESC}${B_DEFAULT};${F_GREEN}m command now.${ESC}${F_DEFAULT}m"
echo "\n${ESC}${F_YELLOW}mPlease give us a star 🌟 on ${ESC}${F_DEFAULT}m$REPO${ESC}${F_YELLOW}m if you ❤️  $PROG_NAME!${ESC}${F_DEFAULT}m"
07070100000048000041ED000000000000000000000002689B9CB300000000000000000000000000000000000000000000001E00000000kubeshark-cli-52.8.1/internal07070100000049000041ED000000000000000000000002689B9CB300000000000000000000000000000000000000000000002600000000kubeshark-cli-52.8.1/internal/connect0707010000004A000081A4000000000000000000000001689B9CB3000010B1000000000000000000000000000000000000002D00000000kubeshark-cli-52.8.1/internal/connect/hub.gopackage connect

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"os"
	"time"

	"github.com/kubeshark/kubeshark/config"
	"github.com/kubeshark/kubeshark/utils"

	"github.com/rs/zerolog/log"
	v1 "k8s.io/api/core/v1"
)

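// Connector is a small HTTP client wrapper the CLI uses to talk to the Kubeshark Hub API.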
type Connector struct {
	url     string
	retries int
	client  *http.Client
}

const DefaultRetries = 3
const DefaultTimeout = 2 * time.Second
const DefaultSleep = 1 * time.Second

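// NewConnector builds a Connector for the given Hub base URL with the given retry count and per-request timeout.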
func NewConnector(url string, retries int, timeout time.Duration) *Connector {
	return &Connector{
		url:     url,
		retries: retries,
		client: &http.Client{
			Timeout: timeout,
		},
	}
}

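// TestConnection repeatedly probes the given path on the Hub until it responds successfully or the retry budget is exhausted.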
func (connector *Connector) TestConnection(path string) error {
	retriesLeft := connector.retries
	for retriesLeft > 0 {
		if isReachable, err := connector.isReachable(path); err != nil || !isReachable {
			log.Debug().Str("url", connector.url).Err(err).Msg("Not ready yet!")
		} else {
			log.Debug().Str("url", connector.url).Msg("Connection test passed successfully.")
			break
		}
		retriesLeft -= 1
		time.Sleep(5 * DefaultSleep)
	}

	if retriesLeft == 0 {
		return fmt.Errorf("Couldn't reach the URL: %s after %d retries!", connector.url, connector.retries)
	}
	return nil
}

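// isReachable performs a single GET against the Hub URL joined with path and reports whether it succeeded.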
func (connector *Connector) isReachable(path string) (bool, error) {
	targetUrl := fmt.Sprintf("%s%s", connector.url, path)
	if _, err := utils.Get(targetUrl, connector.client); err != nil {
		return false, err
	} else {
		return true, nil
	}
}

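// PostWorkerPodToHub reports a Worker pod to the Hub, retrying on transient failures and giving up on URL errors.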
func (connector *Connector) PostWorkerPodToHub(pod *v1.Pod) {
	postWorkerUrl := fmt.Sprintf("%s/pods/worker", connector.url)

	if podMarshalled, err := json.Marshal(pod); err != nil {
		log.Error().Err(err).Msg("Failed to marshal the Worker pod:")
	} else {
		ok := false
		for !ok {
			var resp *http.Response
			if resp, err = utils.Post(postWorkerUrl, "application/json", bytes.NewBuffer(podMarshalled), connector.client, config.Config.License); err != nil || resp.StatusCode != http.StatusOK {
				if _, ok := err.(*url.Error); ok {
					break
				}
				log.Warn().Err(err).Msg("Failed sending the Worker pod to Hub. Retrying...")
			} else {
				log.Debug().Interface("worker-pod", pod).Msg("Reported worker pod to Hub:")
				return
			}
			time.Sleep(DefaultSleep)
		}
	}
}

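// postLicenseRequest is the JSON body sent to the Hub's license endpoint.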
type postLicenseRequest struct {
	License string `json:"license"`
}

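// PostLicense sends the license key to the Hub, retrying on transient failures and giving up on URL errors.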
func (connector *Connector) PostLicense(license string) {
	postLicenseUrl := fmt.Sprintf("%s/license", connector.url)

	payload := postLicenseRequest{
		License: license,
	}

	if payloadMarshalled, err := json.Marshal(payload); err != nil {
		log.Error().Err(err).Msg("Failed to marshal the payload:")
	} else {
		ok := false
		for !ok {
			var resp *http.Response
			if resp, err = utils.Post(postLicenseUrl, "application/json", bytes.NewBuffer(payloadMarshalled), connector.client, config.Config.License); err != nil || resp.StatusCode != http.StatusOK {
				if _, ok := err.(*url.Error); ok {
					break
				}
				log.Warn().Err(err).Msg("Failed sending the license to Hub. Retrying...")
			} else {
				log.Debug().Str("license", license).Msg("Reported license to Hub:")
				return
			}
			time.Sleep(DefaultSleep)
		}
	}
}

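// PostPcapsMerge asks the Hub to merge the captured PCAPs and writes the downloaded result into the given file.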
func (connector *Connector) PostPcapsMerge(out *os.File) {
	postEnvUrl := fmt.Sprintf("%s/pcaps/merge", connector.url)

	if envMarshalled, err := json.Marshal(map[string]string{"query": ""}); err != nil {
		log.Error().Err(err).Msg("Failed to marshal the env:")
	} else {
		ok := false
		for !ok {
			var resp *http.Response
			if resp, err = utils.Post(postEnvUrl, "application/json", bytes.NewBuffer(envMarshalled), connector.client, config.Config.License); err != nil || resp.StatusCode != http.StatusOK {
				if _, ok := err.(*url.Error); ok {
					break
				}
				log.Warn().Err(err).Msg("Failed downloading the exported PCAP. Retrying...")
			} else {
				defer resp.Body.Close()

				// Check server response
				if resp.StatusCode != http.StatusOK {
					log.Error().Str("status", resp.Status).Err(err).Msg("Failed downloading the exported PCAP.")
					return
				}

				// Write the body to the file
				_, err = io.Copy(out, resp.Body)
				if err != nil {
					log.Error().Err(err).Msg("Failed writing PCAP export:")
					return
				}
				log.Info().Str("path", out.Name()).Msg("Downloaded exported PCAP:")
				return
			}
			time.Sleep(DefaultSleep)
		}
	}
}
0707010000004B000081ED000000000000000000000001689B9CB300000265000000000000000000000000000000000000002000000000kubeshark-cli-52.8.1/kubectl.sh#!/bin/bash

# Useful kubectl commands for Kubeshark development
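# Usage: ./kubectl.sh <view-all-resources|view-kubeshark-resources>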

# This command outputs all Kubernetes resources using YAML format and pipes it to VS Code
if [[ $1 = "view-all-resources" ]] ; then
  kubectl get $(kubectl api-resources | awk '{print $1}' | tail -n +2 | tr '\n' ',' | sed s/,\$//) -o yaml |  code -
fi

# This command outputs all Kubernetes resources in "kubeshark" namespace using YAML format and pipes it to VS Code
if [[ $1 = "view-kubeshark-resources" ]] ; then
  kubectl get $(kubectl api-resources | awk '{print $1}' | tail -n +2 | tr '\n' ',' | sed s/,\$//) -n kubeshark -o yaml |  code -
fi
0707010000004C000041ED000000000000000000000002689B9CB300000000000000000000000000000000000000000000002000000000kubeshark-cli-52.8.1/kubernetes0707010000004D000081A4000000000000000000000001689B9CB3000010B8000000000000000000000000000000000000002A00000000kubeshark-cli-52.8.1/kubernetes/config.gopackage kubernetes

import (
	"context"
	"encoding/json"
	"slices"
	"strings"

	"github.com/kubeshark/kubeshark/config"
	"github.com/kubeshark/kubeshark/misc"
	"github.com/rs/zerolog/log"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const (
	SUFFIX_SECRET                     = "secret"
	SUFFIX_CONFIG_MAP                 = "config-map"
	SECRET_LICENSE                    = "LICENSE"
	CONFIG_POD_REGEX                  = "POD_REGEX"
	CONFIG_NAMESPACES                 = "NAMESPACES"
	CONFIG_EXCLUDED_NAMESPACES        = "EXCLUDED_NAMESPACES"
	CONFIG_SCRIPTING_ENV              = "SCRIPTING_ENV"
	CONFIG_INGRESS_ENABLED            = "INGRESS_ENABLED"
	CONFIG_INGRESS_HOST               = "INGRESS_HOST"
	CONFIG_PROXY_FRONT_PORT           = "PROXY_FRONT_PORT"
	CONFIG_AUTH_ENABLED               = "AUTH_ENABLED"
	CONFIG_AUTH_TYPE                  = "AUTH_TYPE"
	CONFIG_AUTH_SAML_IDP_METADATA_URL = "AUTH_SAML_IDP_METADATA_URL"
	CONFIG_SCRIPTING_SCRIPTS          = "SCRIPTING_SCRIPTS"
	CONFIG_SCRIPTING_ACTIVE_SCRIPTS   = "SCRIPTING_ACTIVE_SCRIPTS"
	CONFIG_PCAP_DUMP_ENABLE           = "PCAP_DUMP_ENABLE"
	CONFIG_TIME_INTERVAL              = "TIME_INTERVAL"
	CONFIG_MAX_TIME                   = "MAX_TIME"
	CONFIG_MAX_SIZE                   = "MAX_SIZE"
)

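// SetSecret updates a single key in the Kubeshark self secret and reports whether the stored value changed.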
func SetSecret(provider *Provider, key string, value string) (updated bool, err error) {
	var secret *v1.Secret
	secret, err = provider.clientSet.CoreV1().Secrets(config.Config.Tap.Release.Namespace).Get(context.TODO(), SELF_RESOURCES_PREFIX+SUFFIX_SECRET, metav1.GetOptions{})
	if err != nil {
		return
	}

	if string(secret.Data[key]) != value {
		updated = true
	}
	secret.Data[key] = []byte(value)

	_, err = provider.clientSet.CoreV1().Secrets(config.Config.Tap.Release.Namespace).Update(context.TODO(), secret, metav1.UpdateOptions{})
	if err == nil {
		if updated {
			log.Info().Str("secret", key).Str("value", value).Msg("Updated:")
		}
	} else {
		log.Error().Str("secret", key).Err(err).Send()
	}
	return
}

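// GetConfig reads a single key from the Kubeshark self config map.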
func GetConfig(provider *Provider, key string) (value string, err error) {
	var configMap *v1.ConfigMap
	configMap, err = provider.clientSet.CoreV1().ConfigMaps(config.Config.Tap.Release.Namespace).Get(context.TODO(), SELF_RESOURCES_PREFIX+SUFFIX_CONFIG_MAP, metav1.GetOptions{})
	if err != nil {
		return
	}

	value = configMap.Data[key]
	return
}

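// SetConfig updates a single key in the Kubeshark self config map and reports whether the stored value changed.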
func SetConfig(provider *Provider, key string, value string) (updated bool, err error) {
	var configMap *v1.ConfigMap
	configMap, err = provider.clientSet.CoreV1().ConfigMaps(config.Config.Tap.Release.Namespace).Get(context.TODO(), SELF_RESOURCES_PREFIX+SUFFIX_CONFIG_MAP, metav1.GetOptions{})
	if err != nil {
		return
	}

	if configMap.Data[key] != value {
		updated = true
	}
	configMap.Data[key] = value

	_, err = provider.clientSet.CoreV1().ConfigMaps(config.Config.Tap.Release.Namespace).Update(context.TODO(), configMap, metav1.UpdateOptions{})
	if err == nil {
		if updated {
			log.Info().
				Str("config", key).
				Str("value", func() string {
					if len(value) > 10 {
						return value[:10]
					}
					return value
				}()).
				Int("length", len(value)).
				Msg("Updated. Printing only 10 first characters of value:")
		}
	} else {
		log.Error().Str("config", key).Err(err).Send()
	}
	return
}

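// ConfigGetScripts unmarshals the SCRIPTING_SCRIPTS config map entry into the scripts map.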
func ConfigGetScripts(provider *Provider) (scripts map[int64]misc.ConfigMapScript, err error) {
	var data string
	data, err = GetConfig(provider, CONFIG_SCRIPTING_SCRIPTS)
	if err != nil {
		return
	}

	err = json.Unmarshal([]byte(data), &scripts)
	return
}

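// IsActiveScript reports whether the given script title appears in the SCRIPTING_ACTIVE_SCRIPTS config map entry.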
func IsActiveScript(provider *Provider, title string) bool {
	configActiveScripts, err := GetConfig(provider, CONFIG_SCRIPTING_ACTIVE_SCRIPTS)
	if err != nil {
		return false
	}
	return strings.Contains(configActiveScripts, title)
}

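// DeleteActiveScriptByTitle removes the given title from the comma-separated SCRIPTING_ACTIVE_SCRIPTS entry and writes the updated list back to the config map.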
func DeleteActiveScriptByTitle(provider *Provider, title string) (err error) {
	configActiveScripts, err := GetConfig(provider, CONFIG_SCRIPTING_ACTIVE_SCRIPTS)
	if err != nil {
		return err
	}
	activeScripts := strings.Split(configActiveScripts, ",")

	idx := slices.Index(activeScripts, title)
	if idx != -1 {
		activeScripts = slices.Delete(activeScripts, idx, idx+1)
		_, err = SetConfig(provider, CONFIG_SCRIPTING_ACTIVE_SCRIPTS, strings.Join(activeScripts, ","))
		if err != nil {
			return err
		}
	}
	return nil
}
0707010000004E000081A4000000000000000000000001689B9CB300000194000000000000000000000000000000000000002A00000000kubeshark-cli-52.8.1/kubernetes/consts.gopackage kubernetes

const (
	SELF_RESOURCES_PREFIX      = "kubeshark-"
	FrontPodName               = SELF_RESOURCES_PREFIX + "front"
	FrontServiceName           = FrontPodName
	HubPodName                 = SELF_RESOURCES_PREFIX + "hub"
	HubServiceName             = HubPodName
	K8sAllNamespaces           = ""
	MinKubernetesServerVersion = "1.16.0"
	AppLabelKey                = "app.kubeshark.co/app"
)
0707010000004F000081A4000000000000000000000001689B9CB300000F8E000000000000000000000000000000000000002600000000kubeshark-cli-52.8.1/kubernetes/cp.gopackage kubernetes

import (
	"archive/tar"
	"bufio"
	"context"
	"fmt"
	"io"
	"os"
	"path"
	"path/filepath"
	"strings"

	"github.com/rs/zerolog/log"
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/remotecommand"
)

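// CopyFromPod streams a tar archive of srcPath from the pod's sniffer container via an exec session and extracts it under dstPath.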
func CopyFromPod(ctx context.Context, provider *Provider, pod v1.Pod, srcPath string, dstPath string) error {
	const containerName = "sniffer"
	cmdArr := []string{"tar", "cf", "-", srcPath}
	req := provider.clientSet.CoreV1().RESTClient().
		Post().
		Namespace(pod.Namespace).
		Resource("pods").
		Name(pod.Name).
		SubResource("exec").
		VersionedParams(&v1.PodExecOptions{
			Container: containerName,
			Command:   cmdArr,
			Stdin:     true,
			Stdout:    true,
			Stderr:    true,
			TTY:       false,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(&provider.clientConfig, "POST", req.URL())
	if err != nil {
		return err
	}

	reader, outStream := io.Pipe()
	errReader, errStream := io.Pipe()
	go logErrors(errReader, pod)
	go func() {
		defer outStream.Close()
		err = exec.StreamWithContext(ctx, remotecommand.StreamOptions{
			Stdin:  os.Stdin,
			Stdout: outStream,
			Stderr: errStream,
			Tty:    false,
		})
		if err != nil {
			log.Error().Err(err).Str("pod", pod.Name).Msg("SPDYExecutor:")
		}
	}()

	prefix := getPrefix(srcPath)
	prefix = path.Clean(prefix)
	prefix = stripPathShortcuts(prefix)
	dstPath = path.Join(dstPath, path.Base(prefix))
	err = untarAll(reader, dstPath, prefix)
	return err
}

func logErrors(reader io.Reader, pod v1.Pod) {
	r := bufio.NewReader(reader)
	for {
		msg, _, err := r.ReadLine()
		log.Warn().Str("pod", pod.Name).Str("msg", string(msg)).Msg("SPDYExecutor:")
		if err != nil {
			if err != io.EOF {
				log.Error().Err(err).Send()
			}
			return
		}
	}
}

func untarAll(reader io.Reader, destDir, prefix string) error {
	tarReader := tar.NewReader(reader)
	for {
		header, err := tarReader.Next()
		if err != nil {
			if err != io.EOF {
				return err
			}
			break
		}

		if !strings.HasPrefix(header.Name, prefix) {
			return fmt.Errorf("tar contents corrupted")
		}

		mode := header.FileInfo().Mode()
		destFileName := filepath.Join(destDir, header.Name[len(prefix):])

		baseName := filepath.Dir(destFileName)
		if err := os.MkdirAll(baseName, 0755); err != nil {
			return err
		}
		if header.FileInfo().IsDir() {
			if err := os.MkdirAll(destFileName, 0755); err != nil {
				return err
			}
			continue
		}

		evaledPath, err := filepath.EvalSymlinks(baseName)
		if err != nil {
			return err
		}

		if mode&os.ModeSymlink != 0 {
			linkname := header.Linkname

			if !filepath.IsAbs(linkname) {
				_ = filepath.Join(evaledPath, linkname)
			}

			if err := os.Symlink(linkname, destFileName); err != nil {
				return err
			}
		} else {
			outFile, err := os.Create(destFileName)
			if err != nil {
				return err
			}
			defer outFile.Close()
			if _, err := io.Copy(outFile, tarReader); err != nil {
				return err
			}
			if err := outFile.Close(); err != nil {
				return err
			}
		}
	}

	return nil
}

func getPrefix(file string) string {
	return strings.TrimLeft(file, "/")
}

func stripPathShortcuts(p string) string {
	newPath := p
	trimmed := strings.TrimPrefix(newPath, "../")

	for trimmed != newPath {
		newPath = trimmed
		trimmed = strings.TrimPrefix(newPath, "../")
	}

	// trim leftover {".", ".."}
	if newPath == "." || newPath == ".." {
		newPath = ""
	}

	if len(newPath) > 0 && string(newPath[0]) == "/" {
		return newPath[1:]
	}

	return newPath
}
07070100000050000081A4000000000000000000000001689B9CB3000002CD000000000000000000000000000000000000002A00000000kubeshark-cli-52.8.1/kubernetes/errors.gopackage kubernetes

type K8sTapManagerErrorReason string

const (
	TapManagerWorkerUpdateError K8sTapManagerErrorReason = "WORKER_UPDATE_ERROR"
	TapManagerPodWatchError     K8sTapManagerErrorReason = "POD_WATCH_ERROR"
	TapManagerPodListError      K8sTapManagerErrorReason = "POD_LIST_ERROR"
)

type K8sTapManagerError struct {
	OriginalError    error
	TapManagerReason K8sTapManagerErrorReason
}

// Error implements the error interface for K8sTapManagerError.
func (e *K8sTapManagerError) Error() string {
	return e.OriginalError.Error()
}

type ClusterBehindProxyError struct{}

// Error implements the error interface for ClusterBehindProxyError.
func (e *ClusterBehindProxyError) Error() string {
	return "Cluster is behind proxy"
}
07070100000051000081A4000000000000000000000001689B9CB3000004C6000000000000000000000000000000000000003400000000kubeshark-cli-52.8.1/kubernetes/eventWatchHelper.gopackage kubernetes

import (
	"context"
	"regexp"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
)

type EventWatchHelper struct {
	kubernetesProvider *Provider
	NameRegexFilter    *regexp.Regexp
	Kind               string
}

func NewEventWatchHelper(kubernetesProvider *Provider, NameRegexFilter *regexp.Regexp, kind string) *EventWatchHelper {
	return &EventWatchHelper{
		kubernetesProvider: kubernetesProvider,
		NameRegexFilter:    NameRegexFilter,
		Kind:               kind,
	}
}

// Implements the EventFilterer Interface
func (wh *EventWatchHelper) Filter(wEvent *WatchEvent) (bool, error) {
	event, err := wEvent.ToEvent()
	if err != nil {
		return false, nil
	}
	if !wh.NameRegexFilter.MatchString(event.Name) {
		return false, nil
	}
	if !strings.EqualFold(event.Regarding.Kind, wh.Kind) {
		return false, nil
	}

	return true, nil
}

// Implements the WatchCreator Interface
func (wh *EventWatchHelper) NewWatcher(ctx context.Context, namespace string) (watch.Interface, error) {
	watcher, err := wh.kubernetesProvider.clientSet.EventsV1().Events(namespace).Watch(ctx, metav1.ListOptions{Watch: true})
	if err != nil {
		return nil, err
	}

	return watcher, nil
}
07070100000052000041ED000000000000000000000002689B9CB300000000000000000000000000000000000000000000002500000000kubeshark-cli-52.8.1/kubernetes/helm07070100000053000081A4000000000000000000000001689B9CB300001233000000000000000000000000000000000000002D00000000kubeshark-cli-52.8.1/kubernetes/helm/helm.gopackage helm

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"strings"

	"github.com/kubeshark/kubeshark/config"
	"github.com/kubeshark/kubeshark/misc"
	"github.com/pkg/errors"
	"github.com/rs/zerolog/log"
	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/chart"
	"helm.sh/helm/v3/pkg/chart/loader"
	"helm.sh/helm/v3/pkg/cli"
	"helm.sh/helm/v3/pkg/downloader"
	"helm.sh/helm/v3/pkg/getter"
	"helm.sh/helm/v3/pkg/kube"
	"helm.sh/helm/v3/pkg/registry"
	"helm.sh/helm/v3/pkg/release"
	"helm.sh/helm/v3/pkg/repo"
)

const ENV_HELM_DRIVER = "HELM_DRIVER"

var settings = cli.New()

type Helm struct {
	repo             string
	releaseName      string
	releaseNamespace string
}

func NewHelm(repo string, releaseName string, releaseNamespace string) *Helm {
	return &Helm{
		repo:             repo,
		releaseName:      releaseName,
		releaseNamespace: releaseNamespace,
	}
}

func parseOCIRef(chartRef string) (string, string, error) {
	refTagRegexp := regexp.MustCompile(`^(oci://[^:]+(:[0-9]{1,5})?[^:]+):(.*)$`)
	caps := refTagRegexp.FindStringSubmatch(chartRef)
	if len(caps) != 4 {
		return "", "", errors.Errorf("improperly formatted oci chart reference: %s", chartRef)
	}
	chartRef = caps[1]
	tag := caps[3]

	return chartRef, tag, nil
}

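// Install locates and downloads the Helm chart for the release (unless a local chart path is set via the program-specific *_HELM_CHART_PATH environment variable) and installs it, passing the current CLI config as chart values.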
func (h *Helm) Install() (rel *release.Release, err error) {
	kubeConfigPath := config.Config.KubeConfigPath()
	actionConfig := new(action.Configuration)
	if err = actionConfig.Init(kube.GetConfig(kubeConfigPath, "", h.releaseNamespace), h.releaseNamespace, os.Getenv(ENV_HELM_DRIVER), func(format string, v ...interface{}) {
		log.Info().Msgf(format, v...)
	}); err != nil {
		return
	}

	client := action.NewInstall(actionConfig)
	client.Namespace = h.releaseNamespace
	client.ReleaseName = h.releaseName

	chartPath := os.Getenv(fmt.Sprintf("%s_HELM_CHART_PATH", strings.ToUpper(misc.Program)))
	if chartPath == "" {
		var chartURL string
		chartURL, err = repo.FindChartInRepoURL(h.repo, h.releaseName, "", "", "", "", getter.All(&cli.EnvSettings{}))
		if err != nil {
			return
		}

		var cp string
		cp, err = client.ChartPathOptions.LocateChart(chartURL, settings)
		if err != nil {
			return
		}

		m := &downloader.Manager{
			Out:              os.Stdout,
			ChartPath:        cp,
			Keyring:          client.ChartPathOptions.Keyring,
			SkipUpdate:       false,
			Getters:          getter.All(settings),
			RepositoryConfig: settings.RepositoryConfig,
			RepositoryCache:  settings.RepositoryCache,
			Debug:            settings.Debug,
		}

		dl := downloader.ChartDownloader{
			Out:              m.Out,
			Verify:           m.Verify,
			Keyring:          m.Keyring,
			RepositoryConfig: m.RepositoryConfig,
			RepositoryCache:  m.RepositoryCache,
			RegistryClient:   m.RegistryClient,
			Getters:          m.Getters,
			Options: []getter.Option{
				getter.WithInsecureSkipVerifyTLS(false),
			},
		}

		repoPath := filepath.Dir(m.ChartPath)
		err = os.MkdirAll(repoPath, os.ModePerm)
		if err != nil {
			return
		}

		version := ""
		if registry.IsOCI(chartURL) {
			chartURL, version, err = parseOCIRef(chartURL)
			if err != nil {
				return
			}
			dl.Options = append(dl.Options,
				getter.WithRegistryClient(m.RegistryClient),
				getter.WithTagName(version))
		}

		log.Info().
			Str("url", chartURL).
			Str("repo-path", repoPath).
			Msg("Downloading Helm chart:")

		if _, _, err = dl.DownloadTo(chartURL, version, repoPath); err != nil {
			return
		}

		chartPath = m.ChartPath
	}
	var chart *chart.Chart
	chart, err = loader.Load(chartPath)
	if err != nil {
		return
	}

	log.Info().
		Str("release", chart.Metadata.Name).
		Str("version", chart.Metadata.Version).
		Strs("source", chart.Metadata.Sources).
		Str("kube-version", chart.Metadata.KubeVersion).
		Msg("Installing using Helm:")

	var configMarshalled []byte
	configMarshalled, err = json.Marshal(config.Config)
	if err != nil {
		return
	}

	var configUnmarshalled map[string]interface{}
	err = json.Unmarshal(configMarshalled, &configUnmarshalled)
	if err != nil {
		return
	}

	rel, err = client.Run(chart, configUnmarshalled)
	if err != nil {
		return
	}

	return
}

func (h *Helm) Uninstall() (resp *release.UninstallReleaseResponse, err error) {
	kubeConfigPath := config.Config.KubeConfigPath()
	actionConfig := new(action.Configuration)
	if err = actionConfig.Init(kube.GetConfig(kubeConfigPath, "", h.releaseNamespace), h.releaseNamespace, os.Getenv(ENV_HELM_DRIVER), func(format string, v ...interface{}) {
		log.Info().Msgf(format, v...)
	}); err != nil {
		return
	}

	client := action.NewUninstall(actionConfig)

	resp, err = client.Run(h.releaseName)
	if err != nil {
		return
	}

	return
}
07070100000054000081A4000000000000000000000001689B9CB300000410000000000000000000000000000000000000003200000000kubeshark-cli-52.8.1/kubernetes/podWatchHelper.gopackage kubernetes

import (
	"context"
	"regexp"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
)

type PodWatchHelper struct {
	kubernetesProvider *Provider
	NameRegexFilter    *regexp.Regexp
}

func NewPodWatchHelper(kubernetesProvider *Provider, NameRegexFilter *regexp.Regexp) *PodWatchHelper {
	return &PodWatchHelper{
		kubernetesProvider: kubernetesProvider,
		NameRegexFilter:    NameRegexFilter,
	}
}

// Implements the EventFilterer Interface
func (wh *PodWatchHelper) Filter(wEvent *WatchEvent) (bool, error) {
	pod, err := wEvent.ToPod()
	if err != nil {
		return false, nil
	}

	if !wh.NameRegexFilter.MatchString(pod.Name) {
		return false, nil
	}

	return true, nil
}

// Implements the WatchCreator Interface
func (wh *PodWatchHelper) NewWatcher(ctx context.Context, namespace string) (watch.Interface, error) {
	watcher, err := wh.kubernetesProvider.clientSet.CoreV1().Pods(namespace).Watch(ctx, metav1.ListOptions{Watch: true})
	if err != nil {
		return nil, err
	}

	return watcher, nil
}
07070100000055000081A4000000000000000000000001689B9CB300002565000000000000000000000000000000000000002C00000000kubeshark-cli-52.8.1/kubernetes/provider.gopackage kubernetes

import (
	"bufio"
	"bytes"
	"context"
	"fmt"
	"io"
	"net/url"
	"path/filepath"
	"regexp"
	"strings"

	"github.com/kubeshark/kubeshark/config"
	"github.com/kubeshark/kubeshark/misc"
	"github.com/kubeshark/kubeshark/semver"
	"github.com/kubeshark/kubeshark/utils"
	"github.com/rs/zerolog/log"
	"github.com/tanqiangyes/grep-go/reader"
	core "k8s.io/api/core/v1"
	k8serrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/version"
	"k8s.io/client-go/kubernetes"
	_ "k8s.io/client-go/plugin/pkg/client/auth"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

type Provider struct {
	clientSet        *kubernetes.Clientset
	kubernetesConfig clientcmd.ClientConfig
	clientConfig     rest.Config
	managedBy        string
	createdBy        string
}

func NewProvider(kubeConfigPath string, contextName string) (*Provider, error) {
	kubernetesConfig := loadKubernetesConfiguration(kubeConfigPath, contextName)
	restClientConfig, err := kubernetesConfig.ClientConfig()
	if err != nil {
		if clientcmd.IsEmptyConfig(err) {
			return nil, fmt.Errorf("couldn't find the kube config file, or file is empty (%s)\n"+
				"you can set alternative kube config file path by adding the kube-config-path field to the %s config file, err:  %w", kubeConfigPath, misc.Program, err)
		}
		if clientcmd.IsConfigurationInvalid(err) {
			return nil, fmt.Errorf("invalid kube config file (%s)\n"+
				"you can set alternative kube config file path by adding the kube-config-path field to the %s config file, err:  %w", kubeConfigPath, misc.Program, err)
		}

		return nil, fmt.Errorf("error while using kube config (%s)\n"+
			"you can set alternative kube config file path by adding the kube-config-path field to the %s config file, err:  %w", kubeConfigPath, misc.Program, err)
	}

	clientSet, err := getClientSet(restClientConfig)
	if err != nil {
		return nil, fmt.Errorf("error while using kube config (%s)\n"+
			"you can set alternative kube config file path by adding the kube-config-path field to the %s config file, err:  %w", kubeConfigPath, misc.Program, err)
	}

	log.Debug().
		Str("host", restClientConfig.Host).
		Str("api-path", restClientConfig.APIPath).
		Str("user-agent", restClientConfig.UserAgent).
		Msg("K8s client config.")

	return &Provider{
		clientSet:        clientSet,
		kubernetesConfig: kubernetesConfig,
		clientConfig:     *restClientConfig,
		managedBy:        misc.Program,
		createdBy:        misc.Program,
	}, nil
}

func (provider *Provider) DoesServiceExist(ctx context.Context, namespace string, name string) (bool, error) {
	serviceResource, err := provider.clientSet.CoreV1().Services(namespace).Get(ctx, name, metav1.GetOptions{})
	return provider.doesResourceExist(serviceResource, err)
}

func (provider *Provider) doesResourceExist(resource interface{}, err error) (bool, error) {
	// Getting NotFound error is the expected behavior when a resource does not exist.
	if k8serrors.IsNotFound(err) {
		return false, nil
	}

	if err != nil {
		return false, err
	}

	return resource != nil, nil
}

func (provider *Provider) listPodsImpl(ctx context.Context, regex *regexp.Regexp, namespaces []string, listOptions metav1.ListOptions) ([]core.Pod, error) {
	var pods []core.Pod
	for _, namespace := range namespaces {
		namespacePods, err := provider.clientSet.CoreV1().Pods(namespace).List(ctx, listOptions)
		if err != nil {
			return nil, fmt.Errorf("failed to get pods in ns: [%s], %w", namespace, err)
		}

		pods = append(pods, namespacePods.Items...)
	}

	matchingPods := make([]core.Pod, 0)
	for _, pod := range pods {
		if regex.MatchString(pod.Name) {
			matchingPods = append(matchingPods, pod)
		}
	}
	return matchingPods, nil
}

func (provider *Provider) ListAllPodsMatchingRegex(ctx context.Context, regex *regexp.Regexp, namespaces []string) ([]core.Pod, error) {
	return provider.listPodsImpl(ctx, regex, namespaces, metav1.ListOptions{})
}

func (provider *Provider) ListAllRunningPodsMatchingRegex(ctx context.Context, regex *regexp.Regexp, namespaces []string) ([]core.Pod, error) {
	pods, err := provider.ListAllPodsMatchingRegex(ctx, regex, namespaces)
	if err != nil {
		return nil, err
	}

	matchingPods := make([]core.Pod, 0)
	for _, pod := range pods {
		if IsPodRunning(&pod) {
			matchingPods = append(matchingPods, pod)
		}
	}
	return matchingPods, nil
}

func (provider *Provider) ListPodsByAppLabel(ctx context.Context, namespaces string, labels map[string]string) ([]core.Pod, error) {
	pods, err := provider.clientSet.CoreV1().Pods(namespaces).List(ctx, metav1.ListOptions{
		LabelSelector: metav1.FormatLabelSelector(
			&metav1.LabelSelector{
				MatchLabels: labels,
			},
		),
	})
	if err != nil {
		return nil, err
	}

	return pods.Items, err
}

func (provider *Provider) GetPodLogs(ctx context.Context, namespace string, podName string, containerName string, grep string) (string, error) {
	podLogOpts := core.PodLogOptions{Container: containerName}
	req := provider.clientSet.CoreV1().Pods(namespace).GetLogs(podName, &podLogOpts)
	podLogs, err := req.Stream(ctx)
	if err != nil {
		return "", fmt.Errorf("error opening log stream on ns: %s, pod: %s, %w", namespace, podName, err)
	}
	defer podLogs.Close()
	buf := new(bytes.Buffer)
	if _, err = io.Copy(buf, podLogs); err != nil {
		return "", fmt.Errorf("error copy information from podLogs to buf, ns: %s, pod: %s, %w", namespace, podName, err)
	}

	if grep != "" {
		finder, err := reader.NewFinder(grep, true, true)
		if err != nil {
			panic(err)
		}

		read, err := reader.NewStdReader(bufio.NewReader(buf), []reader.Finder{finder})
		if err != nil {
			panic(err)
		}
		read.Run()
		result := read.Result()[0]

		log.Info().Str("namespace", namespace).Str("pod", podName).Str("container", containerName).Int("lines", len(result.Lines)).Str("grep", grep).Send()
		return strings.Join(result.MatchString, "\n"), nil
	} else {
		log.Info().Str("namespace", namespace).Str("pod", podName).Str("container", containerName).Send()
		return buf.String(), nil
	}
}

func (provider *Provider) GetNamespaceEvents(ctx context.Context, namespace string) (string, error) {
	eventList, err := provider.clientSet.CoreV1().Events(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return "", fmt.Errorf("error getting events on ns: %s, %w", namespace, err)
	}

	return eventList.String(), nil
}

// ValidateNotProxy was added after a customer tried to run kubeshark from Lens, which uses Lens's kube config; that config's cluster server setting points to Lens's local proxy.
// The workaround was to use the user's local default kube config.
// For now we are blocking the option to run kubeshark through a proxy to the k8s API server.
func (provider *Provider) ValidateNotProxy() error {
	kubernetesUrl, err := url.Parse(provider.clientConfig.Host)
	if err != nil {
		log.Debug().Err(err).Msg("While parsing Kubernetes host!")
		return nil
	}

	restProxyClientConfig, _ := provider.kubernetesConfig.ClientConfig()
	restProxyClientConfig.Host = kubernetesUrl.Host

	clientProxySet, err := getClientSet(restProxyClientConfig)
	if err == nil {
		proxyServerVersion, err := clientProxySet.ServerVersion()
		if err != nil {
			return nil
		}

		if *proxyServerVersion == (version.Info{}) {
			return &ClusterBehindProxyError{}
		}
	}

	return nil
}

func (provider *Provider) GetKubernetesVersion() (*semver.SemVersion, error) {
	serverVersion, err := provider.clientSet.ServerVersion()
	if err != nil {
		log.Debug().Err(err).Msg("While getting Kubernetes server version!")
		return nil, err
	}

	serverVersionSemVer := semver.SemVersion(serverVersion.GitVersion)
	return &serverVersionSemVer, nil
}

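// GetNamespaces returns the namespaces to target: the configured ones if any, otherwise all namespaces in the cluster, minus the excluded namespaces.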
func (provider *Provider) GetNamespaces() (namespaces []string) {
	if len(config.Config.Tap.Namespaces) > 0 {
		namespaces = utils.Unique(config.Config.Tap.Namespaces)
	} else {
		namespaceList, err := provider.clientSet.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Error().Err(err).Send()
			return
		}

		for _, ns := range namespaceList.Items {
			namespaces = append(namespaces, ns.Name)
		}
	}

	namespaces = utils.Diff(namespaces, config.Config.Tap.ExcludedNamespaces)

	return
}

func (provider *Provider) GetClientSet() *kubernetes.Clientset {
	return provider.clientSet
}

func getClientSet(config *rest.Config) (*kubernetes.Clientset, error) {
	clientSet, err := kubernetes.NewForConfig(config)
	if err != nil {
		return nil, err
	}

	return clientSet, nil
}

func ValidateKubernetesVersion(serverVersionSemVer *semver.SemVersion) error {
	minKubernetesServerVersionSemVer := semver.SemVersion(MinKubernetesServerVersion)
	if minKubernetesServerVersionSemVer.GreaterThan(*serverVersionSemVer) {
		return fmt.Errorf("kubernetes server version %v is not supported, supporting only kubernetes server version of %v or higher", serverVersionSemVer, MinKubernetesServerVersion)
	}

	return nil
}

func loadKubernetesConfiguration(kubeConfigPath string, context string) clientcmd.ClientConfig {
	configPathList := filepath.SplitList(kubeConfigPath)
	configLoadingRules := &clientcmd.ClientConfigLoadingRules{}
	if len(configPathList) <= 1 {
		configLoadingRules.ExplicitPath = kubeConfigPath
	} else {
		configLoadingRules.Precedence = configPathList
	}
	contextName := context
	return clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		configLoadingRules,
		&clientcmd.ConfigOverrides{
			CurrentContext: contextName,
		},
	)
}

func IsPodRunning(pod *core.Pod) bool {
	return pod.Status.Phase == core.PodRunning
}
07070100000056000081A4000000000000000000000001689B9CB3000016D2000000000000000000000000000000000000002900000000kubeshark-cli-52.8.1/kubernetes/proxy.gopackage kubernetes

import (
	"bytes"
	"context"
	"fmt"
	"net"
	"net/http"
	"net/url"
	"regexp"
	"strings"
	"time"

	"github.com/kubeshark/kubeshark/config"
	"github.com/rs/zerolog/log"
	"k8s.io/apimachinery/pkg/util/httpstream"
	"k8s.io/client-go/tools/portforward"
	"k8s.io/client-go/transport/spdy"
	"k8s.io/kubectl/pkg/proxy"
)

const k8sProxyApiPrefix = "/"
const selfServicePort = 80

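// StartProxy starts a local HTTP server that reroutes requests to the Kubeshark front service through the Kubernetes API server proxy.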
func StartProxy(kubernetesProvider *Provider, proxyHost string, srcPort uint16, selfNamespace string, selfServiceName string) (*http.Server, error) {
	log.Info().
		Str("proxy-host", proxyHost).
		Str("namespace", selfNamespace).
		Str("service", selfServiceName).
		Int("src-port", int(srcPort)).
		Msg("Starting proxy...")

	filter := &proxy.FilterServer{
		AcceptPaths:   proxy.MakeRegexpArrayOrDie(proxy.DefaultPathAcceptRE),
		RejectPaths:   proxy.MakeRegexpArrayOrDie(proxy.DefaultPathRejectRE),
		AcceptHosts:   proxy.MakeRegexpArrayOrDie("^.*"),
		RejectMethods: proxy.MakeRegexpArrayOrDie(proxy.DefaultMethodRejectRE),
	}

	proxyHandler, err := proxy.NewProxyHandler(k8sProxyApiPrefix, filter, &kubernetesProvider.clientConfig, time.Second*2, false)
	if err != nil {
		return nil, err
	}
	mux := http.NewServeMux()
	mux.Handle(k8sProxyApiPrefix, getRerouteHttpHandlerSelfAPI(proxyHandler, selfNamespace, selfServiceName))
	mux.Handle("/static/", getRerouteHttpHandlerSelfStatic(proxyHandler, selfNamespace, selfServiceName))

	l, err := net.Listen("tcp", fmt.Sprintf("%s:%d", proxyHost, int(srcPort)))
	if err != nil {
		return nil, err
	}

	server := &http.Server{
		Handler: mux,
	}

	go func() {
		if err := server.Serve(l); err != nil && err != http.ErrServerClosed {
			log.Error().Err(err).Msg("While creating proxy!")
			return
		}
	}()

	return server, nil
}

func getSelfHubProxiedHostAndPath(selfNamespace string, selfServiceName string) string {
	return fmt.Sprintf("/api/v1/namespaces/%s/services/%s:%d/proxy", selfNamespace, selfServiceName, selfServicePort)
}

func GetProxyOnPort(port uint16) string {
	return fmt.Sprintf("http://%s:%d", config.Config.Tap.Proxy.Host, port)
}

func GetHubUrl() string {
	return fmt.Sprintf("%s/api", GetProxyOnPort(config.Config.Tap.Proxy.Front.Port))
}

func getRerouteHttpHandlerSelfAPI(proxyHandler http.Handler, selfNamespace string, selfServiceName string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Access-Control-Allow-Origin", "*")
		w.Header().Set("Access-Control-Allow-Credentials", "true")
		w.Header().Set("Access-Control-Allow-Headers", "Content-Type, Content-Length, Accept-Encoding, X-CSRF-Token, Authorization, accept, origin, Cache-Control, X-Requested-With, x-session-token")
		w.Header().Set("Access-Control-Allow-Methods", "POST, OPTIONS, GET, PUT, DELETE")

		if r.Method == "OPTIONS" {
			w.WriteHeader(http.StatusNoContent)
			return
		}

		proxiedPath := getSelfHubProxiedHostAndPath(selfNamespace, selfServiceName)

		// Avoid redirecting several times
		if !strings.Contains(r.URL.Path, proxiedPath) {
			r.URL.Path = fmt.Sprintf("%s%s", proxiedPath, r.URL.Path)
		}
		proxyHandler.ServeHTTP(w, r)
	})
}

func getRerouteHttpHandlerSelfStatic(proxyHandler http.Handler, selfNamespace string, selfServiceName string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		r.URL.Path = strings.Replace(r.URL.Path, "/static/", fmt.Sprintf("%s/static/", getSelfHubProxiedHostAndPath(selfNamespace, selfServiceName)), 1)
		proxyHandler.ServeHTTP(w, r)
	})
}

func NewPortForward(kubernetesProvider *Provider, namespace string, podRegex *regexp.Regexp, srcPort uint16, dstPort uint16, ctx context.Context) (*portforward.PortForwarder, error) {
	pods, err := kubernetesProvider.ListPodsByAppLabel(ctx, namespace, map[string]string{AppLabelKey: "front"})
	if err != nil {
		return nil, err
	} else if len(pods) == 0 {
		return nil, fmt.Errorf("didn't find pod to port-forward")
	}

	podName := pods[0].Name

	log.Info().
		Str("namespace", namespace).
		Str("pod", podName).
		Int("src-port", int(srcPort)).
		Int("dst-port", int(dstPort)).
		Msg("Starting proxy using port-forward method...")

	dialer, err := getHttpDialer(kubernetesProvider, namespace, podName)
	if err != nil {
		return nil, err
	}

	stopChan, readyChan := make(chan struct{}, 1), make(chan struct{}, 1)
	out, errOut := new(bytes.Buffer), new(bytes.Buffer)

	forwarder, err := portforward.New(dialer, []string{fmt.Sprintf("%d:%d", srcPort, dstPort)}, stopChan, readyChan, out, errOut)
	if err != nil {
		return nil, err
	}

	go func() {
		if err = forwarder.ForwardPorts(); err != nil {
			log.Error().Err(err).Msg("While Kubernetes port-forwarding!")
			log.Info().Str("command", fmt.Sprintf("kubectl port-forward -n %s service/kubeshark-front 8899:80", config.Config.Tap.Release.Namespace)).Msg("Please try running:")
			return
		}
	}()

	return forwarder, nil
}

func getHttpDialer(kubernetesProvider *Provider, namespace string, podName string) (httpstream.Dialer, error) {
	roundTripper, upgrader, err := spdy.RoundTripperFor(&kubernetesProvider.clientConfig)
	if err != nil {
		log.Error().Err(err).Msg("While creating HTTP dialer!")
		return nil, err
	}

	clientConfigHostUrl, err := url.Parse(kubernetesProvider.clientConfig.Host)
	if err != nil {
		return nil, fmt.Errorf("Failed parsing client config host URL %s, error %w", kubernetesProvider.clientConfig.Host, err)
	}
	path := fmt.Sprintf("%s/api/v1/namespaces/%s/pods/%s/portforward", clientConfigHostUrl.Path, namespace, podName)

	serverURL := url.URL{Scheme: "https", Path: path, Host: clientConfigHostUrl.Host}
	log.Debug().
		Str("url", serverURL.String()).
		Msg("HTTP dialer URL:")

	return spdy.NewDialer(upgrader, &http.Client{Transport: roundTripper}, http.MethodPost, &serverURL), nil
}
07070100000057000081A4000000000000000000000001689B9CB300000912000000000000000000000000000000000000002900000000kubeshark-cli-52.8.1/kubernetes/watch.gopackage kubernetes

import (
	"context"
	"errors"
	"fmt"
	"sync"
	"time"

	"github.com/kubeshark/kubeshark/debounce"
	"github.com/rs/zerolog/log"
	"k8s.io/apimachinery/pkg/watch"
)

type EventFilterer interface {
	Filter(*WatchEvent) (bool, error)
}

type WatchCreator interface {
	NewWatcher(ctx context.Context, namespace string) (watch.Interface, error)
}

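// FilteredWatch starts a watcher per target namespace, passes incoming events through the filterer and forwards matching events to the returned channel, restarting watchers that close unexpectedly.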
func FilteredWatch(ctx context.Context, watcherCreator WatchCreator, targetNamespaces []string, filterer EventFilterer) (<-chan *WatchEvent, <-chan error) {
	eventChan := make(chan *WatchEvent)
	errorChan := make(chan error)

	var wg sync.WaitGroup

	for _, targetNamespace := range targetNamespaces {
		wg.Add(1)

		go func(targetNamespace string) {
			defer wg.Done()
			watchRestartDebouncer := debounce.NewDebouncer(1*time.Minute, func() {})

			for {
				watcher, err := watcherCreator.NewWatcher(ctx, targetNamespace)
				if err != nil {
					errorChan <- fmt.Errorf("error in k8s watch: %v", err)
					break
				}

				err = startWatchLoop(ctx, watcher, filterer, eventChan) // blocking
				watcher.Stop()

				select {
				case <-ctx.Done():
					return
				default:
					break
				}

				if err != nil {
					errorChan <- fmt.Errorf("error in k8s watch: %v", err)
					break
				} else {
					if !watchRestartDebouncer.IsOn() {
						if err := watchRestartDebouncer.SetOn(); err != nil {
							log.Error().Err(err).Send()
						}
						log.Warn().Msg("K8s watch channel closed, restarting watcher...")
						time.Sleep(time.Second * 5)
						continue
					} else {
						errorChan <- errors.New("K8s watch unstable, closes frequently")
						break
					}
				}
			}
		}(targetNamespace)
	}

	go func() {
		<-ctx.Done()
		wg.Wait()
		close(eventChan)
		close(errorChan)
	}()

	return eventChan, errorChan
}

func startWatchLoop(ctx context.Context, watcher watch.Interface, filterer EventFilterer, eventChan chan<- *WatchEvent) error {
	resultChan := watcher.ResultChan()
	for {
		select {
		case e, isChannelOpen := <-resultChan:
			if !isChannelOpen {
				return nil
			}

			wEvent := WatchEvent(e)

			if wEvent.Type == watch.Error {
				return wEvent.ToError()
			}

			if pass, err := filterer.Filter(&wEvent); err != nil {
				return err
			} else if !pass {
				continue
			}

			eventChan <- &wEvent
		case <-ctx.Done():
			return nil
		}
	}
}
07070100000058000081A4000000000000000000000001689B9CB300000442000000000000000000000000000000000000002E00000000kubeshark-cli-52.8.1/kubernetes/watchEvent.gopackage kubernetes

import (
	"fmt"
	"reflect"

	corev1 "k8s.io/api/core/v1"
	eventsv1 "k8s.io/api/events/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/watch"
)

const (
	EventAdded    = watch.Added
	EventModified = watch.Modified
	EventDeleted  = watch.Deleted
	EventBookmark = watch.Bookmark
	EventError    = watch.Error
)

type InvalidObjectType struct {
	RequestedType reflect.Type
}

// Implements the error interface
func (iot *InvalidObjectType) Error() string {
	return fmt.Sprintf("Cannot convert event to type %s", iot.RequestedType)
}

type WatchEvent watch.Event

func (we *WatchEvent) ToPod() (*corev1.Pod, error) {
	pod, ok := we.Object.(*corev1.Pod)
	if !ok {
		return nil, &InvalidObjectType{RequestedType: reflect.TypeOf(pod)}
	}

	return pod, nil
}

func (we *WatchEvent) ToEvent() (*eventsv1.Event, error) {
	event, ok := we.Object.(*eventsv1.Event)
	if !ok {
		return nil, &InvalidObjectType{RequestedType: reflect.TypeOf(event)}
	}

	return event, nil
}

func (we *WatchEvent) ToError() error {
	return apierrors.FromObject(we.Object)
}
07070100000059000081A4000000000000000000000001689B9CB30000026D000000000000000000000000000000000000002200000000kubeshark-cli-52.8.1/kubeshark.gopackage main

import (
	"os"
	"strconv"
	"time"

	"github.com/kubeshark/kubeshark/cmd"
	"github.com/rs/zerolog"
	"github.com/rs/zerolog/log"
)

func main() {
	zerolog.SetGlobalLevel(zerolog.InfoLevel)

	// Short caller (file:line)
	zerolog.CallerMarshalFunc = func(pc uintptr, file string, line int) string {
		short := file
		for i := len(file) - 1; i > 0; i-- {
			if file[i] == '/' {
				short = file[i+1:]
				break
			}
		}
		file = short
		return file + ":" + strconv.Itoa(line)
	}

	log.Logger = log.Output(zerolog.ConsoleWriter{Out: os.Stderr, TimeFormat: time.RFC3339}).With().Caller().Logger()
	cmd.Execute()
}
0707010000005A000041ED000000000000000000000002689B9CB300000000000000000000000000000000000000000000001F00000000kubeshark-cli-52.8.1/manifests0707010000005B000081A4000000000000000000000001689B9CB300000211000000000000000000000000000000000000002900000000kubeshark-cli-52.8.1/manifests/README.md# Manifests

## Apply

Clone the repo:

```shell
git clone git@github.com:kubeshark/kubeshark.git --depth 1
cd kubeshark/manifests
```

To apply the manifests, run:

```shell
kubectl apply -f .
```

To clean up:

```shell
kubectl delete namespace kubeshark
kubectl delete clusterrolebinding kubeshark-cluster-role-binding
kubectl delete clusterrole kubeshark-cluster-role
```

## Accessing

Set up port forwarding to the front service:

```shell
kubectl port-forward service/kubeshark-front 8899:80
```

Then visit [localhost:8899](http://localhost:8899) in your browser.
0707010000005C000081A4000000000000000000000001689B9CB300006956000000000000000000000000000000000000002D00000000kubeshark-cli-52.8.1/manifests/complete.yaml---
# Source: kubeshark/templates/17-network-policies.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  labels:
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
  name: kubeshark-hub-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app.kubeshark.co/app: hub
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - ports:
        - protocol: TCP
          port: 8080
    - ports:
        - protocol: TCP
          port: 9100
  egress:
    - {}
---
# Source: kubeshark/templates/17-network-policies.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  labels:
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
  annotations:
  name: kubeshark-front-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app.kubeshark.co/app: front
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - ports:
        - protocol: TCP
          port: 8080
  egress:
    - {}
---
# Source: kubeshark/templates/17-network-policies.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  labels:
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
  annotations:
  name: kubeshark-dex-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app.kubeshark.co/app: dex
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - ports:
        - protocol: TCP
          port: 5556
  egress:
    - {}
---
# Source: kubeshark/templates/17-network-policies.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  labels:
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
  annotations:
  name: kubeshark-worker-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app.kubeshark.co/app: worker
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - ports:
        - protocol: TCP
          port: 48999
        - protocol: TCP
          port: 49100
  egress:
    - {}
---
# Source: kubeshark/templates/01-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
  name: kubeshark-service-account
  namespace: default
---
# Source: kubeshark/templates/13-secret.yaml
kind: Secret
apiVersion: v1
metadata:
  name: kubeshark-secret
  namespace: default
  labels:
    app.kubeshark.co/app: hub
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
stringData:
    LICENSE: ''
    SCRIPTING_ENV: '{}'
    OIDC_CLIENT_ID: 'not set'
    OIDC_CLIENT_SECRET: 'not set'
---
# Source: kubeshark/templates/13-secret.yaml
kind: Secret
apiVersion: v1
metadata:
  name: kubeshark-saml-x509-crt-secret
  namespace: default
  labels:
    app.kubeshark.co/app: hub
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
stringData:
  AUTH_SAML_X509_CRT: |
---
# Source: kubeshark/templates/13-secret.yaml
kind: Secret
apiVersion: v1
metadata:
  name: kubeshark-saml-x509-key-secret
  namespace: default
  labels:
    app.kubeshark.co/app: hub
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
stringData:
  AUTH_SAML_X509_KEY: |
---
# Source: kubeshark/templates/11-nginx-config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeshark-nginx-config-map
  namespace: default
  labels:
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
data:
  default.conf: |
    server {
      listen 8080;
      listen [::]:8080;
      access_log /dev/stdout;
      error_log /dev/stdout;

      client_body_buffer_size     64k;
      client_header_buffer_size   32k;
      large_client_header_buffers 8 64k;

      location /api {
        rewrite ^/api(.*)$ $1 break;
        proxy_pass http://kubeshark-hub;
        proxy_set_header   X-Forwarded-For $remote_addr;
        proxy_set_header   Host $http_host;
        proxy_set_header Upgrade websocket;
        proxy_set_header Connection Upgrade;
        proxy_set_header  Authorization $http_authorization;
        proxy_pass_header Authorization;
        proxy_connect_timeout 4s;
        proxy_read_timeout 120s;
        proxy_send_timeout 12s;
        proxy_pass_request_headers      on;
      }

      location /saml {
        rewrite ^/saml(.*)$ /saml$1 break;
        proxy_pass http://kubeshark-hub;
        proxy_set_header   X-Forwarded-For $remote_addr;
        proxy_set_header   Host $http_host;
        proxy_connect_timeout 4s;
        proxy_read_timeout 120s;
        proxy_send_timeout 12s;
        proxy_pass_request_headers on;
      }

      location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
        try_files $uri $uri/ /index.html;
        expires -1;
        add_header Cache-Control no-cache;
      }
      error_page   500 502 503 504  /50x.html;
      location = /50x.html {
        root   /usr/share/nginx/html;
      }
    }
---
# Source: kubeshark/templates/12-config-map.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: kubeshark-config-map
  namespace: default
  labels:
    app.kubeshark.co/app: hub
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
data:
    POD_REGEX: '.*'
    NAMESPACES: ''
    EXCLUDED_NAMESPACES: ''
    BPF_OVERRIDE: ''
    STOPPED: 'false'
    SCRIPTING_SCRIPTS: '{}'
    SCRIPTING_ACTIVE_SCRIPTS: ''
    INGRESS_ENABLED: 'false'
    INGRESS_HOST: 'ks.svc.cluster.local'
    PROXY_FRONT_PORT: '8899'
    AUTH_ENABLED: 'true'
    AUTH_TYPE: 'default'
    AUTH_SAML_IDP_METADATA_URL: ''
    AUTH_SAML_ROLE_ATTRIBUTE: 'role'
    AUTH_SAML_ROLES: '{"admin":{"canDownloadPCAP":true,"canStopTrafficCapturing":true,"canUpdateTargetedPods":true,"canUseScripting":true,"filter":"","scriptingPermissions":{"canActivate":true,"canDelete":true,"canSave":true},"showAdminConsoleLink":true}}'
    AUTH_OIDC_ISSUER: 'not set'
    AUTH_OIDC_REFRESH_TOKEN_LIFETIME: '3960h'
    AUTH_OIDC_STATE_PARAM_EXPIRY: '10m'
    AUTH_OIDC_BYPASS_SSL_CA_CHECK: 'false'
    TELEMETRY_DISABLED: 'false'
    SCRIPTING_DISABLED: 'false'
    TARGETED_PODS_UPDATE_DISABLED: ''
    PRESET_FILTERS_CHANGING_ENABLED: 'true'
    RECORDING_DISABLED: ''
    STOP_TRAFFIC_CAPTURING_DISABLED: 'false'
    GLOBAL_FILTER: ""
    DEFAULT_FILTER: "!dns and !error"
    TRAFFIC_SAMPLE_RATE: '100'
    JSON_TTL: '5m'
    PCAP_TTL: '10s'
    PCAP_ERROR_TTL: '60s'
    TIMEZONE: ' '
    CLOUD_LICENSE_ENABLED: 'true'
    AI_ASSISTANT_ENABLED: 'true'
    DUPLICATE_TIMEFRAME: '200ms'
    ENABLED_DISSECTORS: 'amqp,dns,http,icmp,kafka,redis,ws,ldap,radius,diameter'
    CUSTOM_MACROS: '{"https":"tls and (http or http2)"}'
    DISSECTORS_UPDATING_ENABLED: 'true'
    DETECT_DUPLICATES: 'false'
    PCAP_DUMP_ENABLE: 'true'
    PCAP_TIME_INTERVAL: '1m'
    PCAP_MAX_TIME: '1h'
    PCAP_MAX_SIZE: '500MB'
    PORT_MAPPING: '{"amqp":[5671,5672],"diameter":[3868],"http":[80,443,8080],"kafka":[9092],"ldap":[389],"redis":[6379]}'
---
# Source: kubeshark/templates/02-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
  name: kubeshark-cluster-role-default
  namespace: default
rules:
  - apiGroups:
      - ""
      - extensions
      - apps
    resources:
      - nodes
      - pods
      - services
      - endpoints
      - persistentvolumeclaims
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups:
    - networking.k8s.io
    resources:
    - networkpolicies
    verbs:
    - get
    - list
    - watch
    - create
    - update
    - delete
---
# Source: kubeshark/templates/03-cluster-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
  name: kubeshark-cluster-role-binding-default
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubeshark-cluster-role-default
subjects:
  - kind: ServiceAccount
    name: kubeshark-service-account
    namespace: default
---
# Source: kubeshark/templates/02-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
  annotations:
  name: kubeshark-self-config-role
  namespace: default
rules:
  - apiGroups:
      - ""
      - v1
    resourceNames:
      - kubeshark-secret
      - kubeshark-config-map
      - kubeshark-secret-default
      - kubeshark-config-map-default
    resources:
      - secrets
      - configmaps
    verbs:
      - create
      - get
      - watch
      - list
      - update
      - patch
      - delete
  - apiGroups:
      - ""
      - v1
    resources:
      - secrets
      - configmaps
      - pods/log
    verbs:
      - create
      - get
---
# Source: kubeshark/templates/03-cluster-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
  annotations:
  name: kubeshark-self-config-role-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubeshark-self-config-role
subjects:
  - kind: ServiceAccount
    name: kubeshark-service-account
    namespace: default
---
# Source: kubeshark/templates/05-hub-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubeshark.co/app: hub
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
  name: kubeshark-hub
  namespace: default
spec:
  ports:
    - name: kubeshark-hub
      port: 80
      targetPort: 8080
  selector:
    app.kubeshark.co/app: hub
  type: ClusterIP
---
# Source: kubeshark/templates/07-front-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
  name: kubeshark-front
  namespace: default
spec:
  ports:
    - name: kubeshark-front
      port: 80
      targetPort: 8080
  selector:
    app.kubeshark.co/app: front
  type: ClusterIP
---
# Source: kubeshark/templates/15-worker-service-metrics.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '49100'
  name: kubeshark-worker-metrics
  namespace: default
spec:
  selector:
    app.kubeshark.co/app: worker
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
  ports:
  - name: metrics
    protocol: TCP
    port: 49100
    targetPort: 49100
---
# Source: kubeshark/templates/16-hub-service-metrics.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9100'
  name: kubeshark-hub-metrics
  namespace: default
spec:
  selector:
    app.kubeshark.co/app: hub
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
  ports:
  - name: metrics
    protocol: TCP
    port: 9100
    targetPort: 9100
---
# Source: kubeshark/templates/09-worker-daemon-set.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app.kubeshark.co/app: worker
    sidecar.istio.io/inject: "false"
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
  name: kubeshark-worker-daemon-set
  namespace: default
spec:
  selector:
    matchLabels:
      app.kubeshark.co/app: worker
      app.kubernetes.io/name: kubeshark
      app.kubernetes.io/instance: kubeshark
  template:
    metadata:
      labels:
        app.kubeshark.co/app: worker
        helm.sh/chart: kubeshark-52.8.1
        app.kubernetes.io/name: kubeshark
        app.kubernetes.io/instance: kubeshark
        app.kubernetes.io/version: "52.8.1"
        app.kubernetes.io/managed-by: Helm
      name: kubeshark-worker-daemon-set
      namespace: kubeshark
    spec:
      initContainers:
        - command:
          - /bin/sh
          - -c
          - mkdir -p /sys/fs/bpf && mount | grep -q '/sys/fs/bpf' || mount -t bpf bpf /sys/fs/bpf
          image: 'docker.io/kubeshark/worker:v52.8'
          imagePullPolicy: Always
          name: mount-bpf
          securityContext:
            privileged: true
          volumeMounts:
          - mountPath: /sys
            name: sys
            mountPropagation: Bidirectional
      containers:
        - command:
            - ./worker
            - -i
            - any
            - -port
            - '48999'
            - -metrics-port
            - '49100'
            - -packet-capture
            - 'best'
            - -loglevel
            - 'warning'
            - -servicemesh
            - -procfs
            - /hostproc
            - -resolution-strategy
            - 'auto'
            - -staletimeout
            - '30'
          image: 'docker.io/kubeshark/worker:v52.8'
          imagePullPolicy: Always
          name: sniffer
          ports:
            - containerPort: 49100
              protocol: TCP
              name: metrics
          env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: TCP_STREAM_CHANNEL_TIMEOUT_MS
            value: '10000'
          - name: TCP_STREAM_CHANNEL_TIMEOUT_SHOW
            value: 'false'
          - name: KUBESHARK_CLOUD_API_URL
            value: 'https://api.kubeshark.co'
          - name: PROFILING_ENABLED
            value: 'false'
          - name: SENTRY_ENABLED
            value: 'false'
          - name: SENTRY_ENVIRONMENT
            value: 'production'
          resources:
            limits:
              memory: 5Gi
            requests:
              cpu: 50m
              memory: 50Mi
          securityContext:
            privileged: true
          readinessProbe:
            periodSeconds: 5
            failureThreshold: 3
            successThreshold: 1
            initialDelaySeconds: 5
            tcpSocket:
              port: 48999
          livenessProbe:
            periodSeconds: 5
            failureThreshold: 3
            successThreshold: 1
            initialDelaySeconds: 5
            tcpSocket:
              port: 48999
          volumeMounts:
            - mountPath: /hostproc
              name: proc
              readOnly: true
            - mountPath: /sys
              name: sys
              readOnly: true
              mountPropagation: HostToContainer
            - mountPath: /app/data
              name: data
        - command:
            - ./tracer
            - -procfs
            - /hostproc
            - -disable-tls-log
            - -loglevel
            - 'warning'
          image: 'docker.io/kubeshark/worker:v52.8'
          imagePullPolicy: Always
          name: tracer
          env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: PROFILING_ENABLED
            value: 'false'
          - name: SENTRY_ENABLED
            value: 'false'
          - name: SENTRY_ENVIRONMENT
            value: 'production'
          resources:
            limits:
              memory: 5Gi
            requests:
              cpu: 50m
              memory: 50Mi
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /hostproc
              name: proc
              readOnly: true
            - mountPath: /sys
              name: sys
              readOnly: true
              mountPropagation: HostToContainer
            - mountPath: /app/data
              name: data
            - mountPath: /etc/os-release
              name: os-release
              readOnly: true
            - mountPath: /hostroot
              mountPropagation: HostToContainer
              name: root
              readOnly: true
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      serviceAccountName: kubeshark-service-account
      tolerations:
        - key: 
          operator: "Exists"
          effect: "NoExecute"
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      volumes:
        - hostPath:
            path: /proc
          name: proc
        - hostPath:
            path: /sys
          name: sys
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - hostPath:
            path: /etc/os-release
          name: os-release
        - hostPath:
            path: /
          name: root
        - name: data
          emptyDir:
            sizeLimit: 5Gi
---
# Source: kubeshark/templates/04-hub-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubeshark.co/app: hub
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
  name: kubeshark-hub
  namespace: default
spec:
  replicas: 1  # Set the desired number of replicas
  selector:
    matchLabels:
      app.kubeshark.co/app: hub
      app.kubernetes.io/name: kubeshark
      app.kubernetes.io/instance: kubeshark
  template:
    metadata:
      labels:
        app.kubeshark.co/app: hub
        helm.sh/chart: kubeshark-52.8.1
        app.kubernetes.io/name: kubeshark
        app.kubernetes.io/instance: kubeshark
        app.kubernetes.io/version: "52.8.1"
        app.kubernetes.io/managed-by: Helm
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      serviceAccountName: kubeshark-service-account
      containers:
        - name: hub
          command:
            - ./hub
            - -port
            - "8080"
            - -loglevel
            - 'warning'
            - -capture-stop-after
            - "5m"
          env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: SENTRY_ENABLED
            value: 'false'
          - name: SENTRY_ENVIRONMENT
            value: 'production'
          - name: KUBESHARK_CLOUD_API_URL
            value: 'https://api.kubeshark.co'
          - name: PROFILING_ENABLED
            value: 'false'
          image: 'docker.io/kubeshark/hub:v52.8'
          imagePullPolicy: Always
          readinessProbe:
            periodSeconds: 5
            failureThreshold: 3
            successThreshold: 1
            initialDelaySeconds: 5
            tcpSocket:
              port: 8080
          livenessProbe:
            periodSeconds: 5
            failureThreshold: 3
            successThreshold: 1
            initialDelaySeconds: 5
            tcpSocket:
              port: 8080
          resources:
            limits:
              memory: 5Gi
            requests:
              cpu: 50m
              memory: 50Mi
          volumeMounts:
          - name: saml-x509-volume
            mountPath: "/etc/saml/x509"
            readOnly: true
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      volumes:
      - name: saml-x509-volume
        projected:
          sources:
          - secret:
              name: kubeshark-saml-x509-crt-secret
              items:
              - key: AUTH_SAML_X509_CRT
                path: kubeshark.crt
          - secret:
              name: kubeshark-saml-x509-key-secret
              items:
              - key: AUTH_SAML_X509_KEY
                path: kubeshark.key
---
# Source: kubeshark/templates/06-front-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubeshark.co/app: front
    helm.sh/chart: kubeshark-52.8.1
    app.kubernetes.io/name: kubeshark
    app.kubernetes.io/instance: kubeshark
    app.kubernetes.io/version: "52.8.1"
    app.kubernetes.io/managed-by: Helm
  name: kubeshark-front
  namespace: default
spec:
  replicas: 1  # Set the desired number of replicas
  selector:
    matchLabels:
      app.kubeshark.co/app: front
      app.kubernetes.io/name: kubeshark
      app.kubernetes.io/instance: kubeshark
  template:
    metadata:
      labels:
        app.kubeshark.co/app: front
        helm.sh/chart: kubeshark-52.8.1
        app.kubernetes.io/name: kubeshark
        app.kubernetes.io/instance: kubeshark
        app.kubernetes.io/version: "52.8.1"
        app.kubernetes.io/managed-by: Helm
    spec:
      containers:
        - env:
            - name: REACT_APP_AUTH_ENABLED
              value: 'true'
            - name: REACT_APP_AUTH_TYPE
              value: 'default'
            - name: REACT_APP_COMPLETE_STREAMING_ENABLED
              value: 'true'
            - name: REACT_APP_AUTH_SAML_IDP_METADATA_URL
              value: ' '
            - name: REACT_APP_TIMEZONE
              value: ' '
            - name: REACT_APP_SCRIPTING_DISABLED
              value: 'false'
            - name: REACT_APP_TARGETED_PODS_UPDATE_DISABLED
              value: 'false'
            - name: REACT_APP_PRESET_FILTERS_CHANGING_ENABLED
              value: 'true'
            - name: REACT_APP_BPF_OVERRIDE_DISABLED
              value: 'true'
            - name: REACT_APP_RECORDING_DISABLED
              value: 'false'
            - name: REACT_APP_STOP_TRAFFIC_CAPTURING_DISABLED
              value: 'false'
            - name: 'REACT_APP_CLOUD_LICENSE_ENABLED'
              value: 'true'
            - name: 'REACT_APP_AI_ASSISTANT_ENABLED'
              value: 'true'
            - name: REACT_APP_SUPPORT_CHAT_ENABLED
              value: 'true'
            - name: REACT_APP_BETA_ENABLED
              value: 'false'
            - name: REACT_APP_DISSECTORS_UPDATING_ENABLED
              value: 'true'
            - name: REACT_APP_SENTRY_ENABLED
              value: 'false'
            - name: REACT_APP_SENTRY_ENVIRONMENT
              value: 'production'
          image: 'docker.io/kubeshark/front:v52.8'
          imagePullPolicy: Always
          name: kubeshark-front
          livenessProbe:
            periodSeconds: 1
            failureThreshold: 3
            successThreshold: 1
            initialDelaySeconds: 3
            tcpSocket:
              port: 8080
          readinessProbe:
            periodSeconds: 1
            failureThreshold: 3
            successThreshold: 1
            initialDelaySeconds: 3
            tcpSocket:
              port: 8080
            timeoutSeconds: 1
          resources:
            limits:
              cpu: 750m
              memory: 1Gi
            requests:
              cpu: 50m
              memory: 50Mi
          volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/conf.d/default.conf
              subPath: default.conf
              readOnly: true
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      volumes:
        - name: nginx-config
          configMap:
            name: kubeshark-nginx-config-map
      dnsPolicy: ClusterFirstWithHostNet
      serviceAccountName: kubeshark-service-account
0707010000005D000041ED000000000000000000000002689B9CB300000000000000000000000000000000000000000000002A00000000kubeshark-cli-52.8.1/manifests/prometheus0707010000005E000081A4000000000000000000000001689B9CB30000036F000000000000000000000000000000000000004500000000kubeshark-cli-52.8.1/manifests/prometheus/kube_prometheus_stack.yamlgrafana:
  additionalDataSources: []
prometheus:
  prometheusSpec:
    scrapeInterval: 10s
    evaluationInterval: 30s
    additionalScrapeConfigs: |
      - job_name: 'kubeshark-worker-metrics'
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_name]
            target_label: pod
          - source_labels: [__meta_kubernetes_pod_node_name]
            target_label: node
          - source_labels: [__meta_kubernetes_endpoint_port_name]
            action: keep
            regex: ^metrics$
          - source_labels: [__address__, __meta_kubernetes_endpoint_port_number]
            action: replace
            regex: ([^:]+)(?::\d+)?
            replacement: $1:49100
            target_label: __address__
          - action: labelmap
            regex: __meta_kubernetes_service_label_(.+)
0707010000005F000041ED000000000000000000000002689B9CB300000000000000000000000000000000000000000000002300000000kubeshark-cli-52.8.1/manifests/tls07070100000060000081A4000000000000000000000001689B9CB3000000EE000000000000000000000000000000000000003400000000kubeshark-cli-52.8.1/manifests/tls/certificate.yamlapiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kubeshark-tls
  namespace: default
spec:
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  secretName: cert-kubeshark
  dnsNames:
  - ks.svc.cluster.local
07070100000061000081A4000000000000000000000001689B9CB30000014B000000000000000000000000000000000000003700000000kubeshark-cli-52.8.1/manifests/tls/cluster-issuer.yamlapiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: info@kubeshark.co
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
    - http01:
        ingress:
          class: kubeshark-ingress-class
07070100000062000081ED000000000000000000000001689B9CB3000001C1000000000000000000000000000000000000002A00000000kubeshark-cli-52.8.1/manifests/tls/run.sh#!/bin/bash

__dir="$(cd -P -- "$(dirname -- "$0")" && pwd -P)"

helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.9.1/cert-manager.crds.yaml
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.9.1

kubectl apply -f "${__dir}/cluster-issuer.yaml"
kubectl apply -f "${__dir}/certificate.yaml"
07070100000063000041ED000000000000000000000002689B9CB300000000000000000000000000000000000000000000001A00000000kubeshark-cli-52.8.1/misc07070100000064000081A4000000000000000000000001689B9CB3000002B3000000000000000000000000000000000000002400000000kubeshark-cli-52.8.1/misc/consts.gopackage misc

import (
	"fmt"
	"os"
	"path"
)

var (
	Software       = "Kubeshark"
	Program        = "kubeshark"
	Description    = "The API Traffic Analyzer for Kubernetes"
	Website        = "https://kubeshark.co"
	Email          = "info@kubeshark.co"
	Ver            = "0.0.0"
	Branch         = "master"
	GitCommitHash  = "" // this var is overridden using ldflags in makefile when building
	BuildTimestamp = "" // this var is overridden using ldflags in makefile when building
	RBACVersion    = "v1"
	Platform       = ""
)

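// GetDotFolderPath returns the path of the program's dot folder inside the
// user's home directory (e.g. ~/.kubeshark), or an empty string if the home
// directory cannot be determined.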
func GetDotFolderPath() string {
	home, homeDirErr := os.UserHomeDir()
	if homeDirErr != nil {
		return ""
	}
	return path.Join(home, fmt.Sprintf(".%s", Program))
}
07070100000065000041ED000000000000000000000002689B9CB300000000000000000000000000000000000000000000002200000000kubeshark-cli-52.8.1/misc/fsUtils07070100000066000081A4000000000000000000000001689B9CB30000019C000000000000000000000000000000000000002E00000000kubeshark-cli-52.8.1/misc/fsUtils/dirUtils.gopackage fsUtils

import (
	"fmt"
	"os"
)

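// EnsureDir creates dirName with 0700 permissions if it does not already
// exist. If the path exists, it verifies that it is a directory.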
func EnsureDir(dirName string) error {
	err := os.Mkdir(dirName, 0700)
	if err == nil {
		return nil
	}
	if os.IsExist(err) {
		// check that the existing path is a directory
		info, err := os.Stat(dirName)
		if err != nil {
			return err
		}
		if !info.IsDir() {
			return fmt.Errorf("path exists but is not a directory: %s", dirName)
		}
		return nil
	}
	return err
}
07070100000067000081A4000000000000000000000001689B9CB300000153000000000000000000000000000000000000002F00000000kubeshark-cli-52.8.1/misc/fsUtils/globUtils.gopackage fsUtils

import (
	"fmt"
	"os"
	"path/filepath"
)

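// RemoveFilesByExtension deletes every file in dirPath whose name ends with
// the given extension.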
func RemoveFilesByExtension(dirPath string, ext string) error {
	files, err := filepath.Glob(filepath.Join(dirPath, fmt.Sprintf("/*.%s", ext)))
	if err != nil {
		return err
	}

	for _, f := range files {
		if err := os.Remove(f); err != nil {
			return err
		}
	}

	return nil
}
07070100000068000081A4000000000000000000000001689B9CB300000A2A000000000000000000000000000000000000003800000000kubeshark-cli-52.8.1/misc/fsUtils/kubesharkLogsUtils.gopackage fsUtils

import (
	"archive/zip"
	"context"
	"fmt"
	"os"
	"regexp"

	"github.com/kubeshark/kubeshark/config"
	"github.com/kubeshark/kubeshark/kubernetes"
	"github.com/kubeshark/kubeshark/misc"
	"github.com/rs/zerolog/log"
)

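// DumpLogs collects the logs of every container of the Kubeshark pods in the
// release namespace, together with the namespace events and the CLI config
// file, and writes them all into a ZIP archive at filePath.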
func DumpLogs(ctx context.Context, provider *kubernetes.Provider, filePath string, grep string) error {
	podExactRegex := regexp.MustCompile("^" + kubernetes.SELF_RESOURCES_PREFIX)
	pods, err := provider.ListAllPodsMatchingRegex(ctx, podExactRegex, []string{config.Config.Tap.Release.Namespace})
	if err != nil {
		return err
	}

	if len(pods) == 0 {
		return fmt.Errorf("No %s pods found in namespace %s", misc.Software, config.Config.Tap.Release.Namespace)
	}

	newZipFile, err := os.Create(filePath)
	if err != nil {
		return err
	}
	defer newZipFile.Close()
	zipWriter := zip.NewWriter(newZipFile)
	defer zipWriter.Close()

	for _, pod := range pods {
		for _, container := range pod.Spec.Containers {
			logs, err := provider.GetPodLogs(ctx, pod.Namespace, pod.Name, container.Name, grep)
			if err != nil {
				log.Error().Err(err).Msg("Failed to get logs!")
				continue
			} else {
				log.Debug().
					Int("length", len(logs)).
					Str("namespace", pod.Namespace).
					Str("pod", pod.Name).
					Str("container", container.Name).
					Msg("Successfully read logs.")
			}

			if err := AddStrToZip(zipWriter, logs, fmt.Sprintf("%s.%s.%s.log", pod.Namespace, pod.Name, container.Name)); err != nil {
				log.Error().Err(err).Msg("Failed to write logs!")
			} else {
				log.Debug().
					Int("length", len(logs)).
					Str("namespace", pod.Namespace).
					Str("pod", pod.Name).
					Str("container", container.Name).
					Msg("Successfully added logs to the ZIP file.")
			}
		}
	}

	events, err := provider.GetNamespaceEvents(ctx, config.Config.Tap.Release.Namespace)
	if err != nil {
		log.Error().Err(err).Msg("Failed to get k8s events!")
	} else {
		log.Debug().Str("namespace", config.Config.Tap.Release.Namespace).Msg("Successfully read events.")
	}

	if err := AddStrToZip(zipWriter, events, fmt.Sprintf("%s_events.log", config.Config.Tap.Release.Namespace)); err != nil {
		log.Error().Err(err).Msg("Failed to write events!")
	} else {
		log.Debug().Str("namespace", config.Config.Tap.Release.Namespace).Msg("Successfully added events.")
	}

	if err := AddFileToZip(zipWriter, config.ConfigFilePath); err != nil {
		log.Error().Err(err).Msg("Failed write file!")
	} else {
		log.Debug().Str("file-path", config.ConfigFilePath).Msg("Successfully added file.")
	}

	log.Info().Str("path", filePath).Msg("You can find the ZIP file with all logs at:")
	return nil
}
07070100000069000081A4000000000000000000000001689B9CB300000ABA000000000000000000000000000000000000002E00000000kubeshark-cli-52.8.1/misc/fsUtils/zipUtils.gopackage fsUtils

import (
	"archive/zip"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strings"

	"github.com/rs/zerolog/log"
)

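// AddFileToZip adds the file at filename to the ZIP archive under its base
// name, using DEFLATE compression.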
func AddFileToZip(zipWriter *zip.Writer, filename string) error {

	fileToZip, err := os.Open(filename)
	if err != nil {
		return fmt.Errorf("failed to open file %s, %w", filename, err)
	}
	defer fileToZip.Close()

	// Get the file information
	info, err := fileToZip.Stat()
	if err != nil {
		return fmt.Errorf("failed to get file information %s, %w", filename, err)
	}

	header, err := zip.FileInfoHeader(info)
	if err != nil {
		return err
	}

	// FileInfoHeader() above uses only the base name of the file. To preserve
	// the folder structure, this could be overwritten with the full path.
	header.Name = filepath.Base(filename)

	// Change to deflate to gain better compression
	// see http://golang.org/pkg/archive/zip/#pkg-constants
	header.Method = zip.Deflate

	writer, err := zipWriter.CreateHeader(header)
	if err != nil {
		return fmt.Errorf("failed to create header in zip for %s, %w", filename, err)
	}
	_, err = io.Copy(writer, fileToZip)
	return err
}

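// AddStrToZip writes the given string into the ZIP archive as a new entry
// named fileName.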
func AddStrToZip(writer *zip.Writer, logs string, fileName string) error {
	if zipFile, err := writer.Create(fileName); err != nil {
		return fmt.Errorf("couldn't create a log file inside zip for %s, %w", fileName, err)
	} else {
		if _, err = zipFile.Write([]byte(logs)); err != nil {
			return fmt.Errorf("couldn't write logs to zip file: %s, %w", fileName, err)
		}
	}
	return nil
}

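// Unzip extracts all entries of the ZIP archive into dest, rejecting paths
// that would escape the destination directory (ZipSlip).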
func Unzip(reader *zip.Reader, dest string) error {
	dest, _ = filepath.Abs(dest)
	_ = os.MkdirAll(dest, os.ModePerm)

	// Closure to address file descriptors issue with all the deferred .Close() methods
	extractAndWriteFile := func(f *zip.File) error {
		rc, err := f.Open()
		if err != nil {
			return err
		}
		defer func() {
			if err := rc.Close(); err != nil {
				panic(err)
			}
		}()

		path := filepath.Join(dest, f.Name)

		// Check for ZipSlip (Directory traversal)
		if !strings.HasPrefix(path, filepath.Clean(dest)+string(os.PathSeparator)) {
			return fmt.Errorf("illegal file path: %s", path)
		}

		if f.FileInfo().IsDir() {
			_ = os.MkdirAll(path, f.Mode())
		} else {
			_ = os.MkdirAll(filepath.Dir(path), f.Mode())
			log.Info().Str("path", path).Msg("Writing HAR file...")
			f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, f.Mode())
			if err != nil {
				return err
			}
			defer func() {
				if err := f.Close(); err != nil {
					panic(err)
				}
				log.Info().Str("path", path).Msg("HAR file at:")
			}()

			_, err = io.Copy(f, rc)
			if err != nil {
				return err
			}
		}
		return nil
	}

	for _, f := range reader.File {
		err := extractAndWriteFile(f)
		if err != nil {
			return err
		}
	}

	return nil
}
0707010000006A000081A4000000000000000000000001689B9CB3000004F9000000000000000000000000000000000000002700000000kubeshark-cli-52.8.1/misc/scripting.gopackage misc

import (
	"os"
	"path/filepath"

	"github.com/robertkrimen/otto/ast"
	"github.com/robertkrimen/otto/file"
	"github.com/robertkrimen/otto/parser"
)

type Script struct {
	Path   string `json:"path"`
	Title  string `json:"title"`
	Code   string `json:"code"`
	Active bool   `json:"active"`
}

type ConfigMapScript struct {
	Title  string `json:"title"`
	Code   string `json:"code"`
	Active bool   `json:"active"`
}

func (s *Script) ConfigMap() ConfigMapScript {
	return ConfigMapScript{
		Title:  s.Title,
		Code:   s.Code,
		Active: s.Active,
	}
}

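// ReadScriptFile reads a JavaScript file, parses it and returns a Script
// whose title is taken from the first comment in the file. The returned
// script is inactive by default.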
func ReadScriptFile(path string) (script *Script, err error) {
	filename := filepath.Base(path)
	var body []byte
	body, err = os.ReadFile(path)
	if err != nil {
		return
	}
	content := string(body)

	var program *ast.Program
	program, err = parser.ParseFile(nil, filename, content, parser.StoreComments)
	if err != nil {
		return
	}

	var title string
	var titleIsSet bool
	code := content

	var idx0 file.Idx
	for node, comments := range program.Comments {
		if (titleIsSet && node.Idx0() > idx0) || len(comments) == 0 {
			continue
		}

		idx0 = node.Idx0()
		title = comments[0].Text
		titleIsSet = true
	}

	script = &Script{
		Path:   path,
		Title:  title,
		Code:   code,
		Active: false,
	}

	return
}
0707010000006B000041ED000000000000000000000002689B9CB300000000000000000000000000000000000000000000002200000000kubeshark-cli-52.8.1/misc/version0707010000006C000081A4000000000000000000000001689B9CB3000005B8000000000000000000000000000000000000003200000000kubeshark-cli-52.8.1/misc/version/versionCheck.gopackage version

import (
	"context"
	"fmt"
	"os"
	"runtime"
	"strings"
	"time"

	"github.com/kubeshark/kubeshark/misc"
	"github.com/kubeshark/kubeshark/utils"
	"github.com/rs/zerolog/log"

	"github.com/google/go-github/v37/github"
)

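// CheckNewerVersion queries GitHub for the latest release and logs a warning
// with a download command if the local version differs from it. The check is
// skipped when the KUBESHARK_DISABLE_VERSION_CHECK environment variable is
// set to a non-empty value.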
func CheckNewerVersion() {
	if os.Getenv(fmt.Sprintf("%s_DISABLE_VERSION_CHECK", strings.ToUpper(misc.Program))) != "" {
		return
	}

	log.Info().Msg("Checking for a newer version...")
	start := time.Now()
	client := github.NewClient(nil)
	latestRelease, _, err := client.Repositories.GetLatestRelease(context.Background(), misc.Program, misc.Program)
	if err != nil {
		log.Error().Err(err).Msg("Failed to get the latest release.")
		return
	}

	latestVersion := *latestRelease.TagName

	log.Debug().
		Str("upstream-version", latestVersion).
		Str("local-version", misc.Ver).
		Dur("elapsed-time", time.Since(start)).
		Msg("Fetched the latest release:")

	if misc.Ver != latestVersion {
		var downloadCommand string
		if runtime.GOOS == "windows" {
			downloadCommand = fmt.Sprintf("curl -LO %v/%s.exe", strings.Replace(*latestRelease.HTMLURL, "tag", "download", 1), misc.Program)
		} else {
			downloadCommand = fmt.Sprintf("sh <(curl -Ls %s/install)", misc.Website)
		}
		msg := fmt.Sprintf("There is a new release! %v -> %v Please upgrade to the latest release, as new releases are not always backward compatible. Run:", misc.Ver, latestVersion)
		log.Warn().Str("command", downloadCommand).Msg(fmt.Sprintf(utils.Yellow, msg))
	}
}
0707010000006D000041ED000000000000000000000002689B9CB300000000000000000000000000000000000000000000001C00000000kubeshark-cli-52.8.1/semver0707010000006E000081A4000000000000000000000001689B9CB3000003C4000000000000000000000000000000000000002600000000kubeshark-cli-52.8.1/semver/semver.gopackage semver

import (
	"regexp"
	"strconv"
)

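// SemVersion is a semantic version string such as "52.8.1".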
type SemVersion string

func (v SemVersion) IsValid() bool {
	re := regexp.MustCompile(`\d+`)
	breakdown := re.FindAllString(string(v), 3)

	return len(breakdown) == 3
}

func (v SemVersion) Breakdown() (string, string, string) {
	re := regexp.MustCompile(`\d+`)
	breakdown := re.FindAllString(string(v), 3)

	return breakdown[0], breakdown[1], breakdown[2]
}

func (v SemVersion) Major() string {
	major, _, _ := v.Breakdown()
	return major
}

func (v SemVersion) Minor() string {
	_, minor, _ := v.Breakdown()
	return minor
}

func (v SemVersion) Patch() string {
	_, _, patch := v.Breakdown()
	return patch
}

// GreaterThan reports whether v is a newer version than v2. Components are
// compared numerically, since comparing the digit strings directly would
// order "9" after "10".
func (v SemVersion) GreaterThan(v2 SemVersion) bool {
	toInt := func(s string) int {
		n, _ := strconv.Atoi(s)
		return n
	}

	if toInt(v.Major()) != toInt(v2.Major()) {
		return toInt(v.Major()) > toInt(v2.Major())
	}

	if toInt(v.Minor()) != toInt(v2.Minor()) {
		return toInt(v.Minor()) > toInt(v2.Minor())
	}

	return toInt(v.Patch()) > toInt(v2.Patch())
}
0707010000006F000041ED000000000000000000000002689B9CB300000000000000000000000000000000000000000000001B00000000kubeshark-cli-52.8.1/utils07070100000070000081A4000000000000000000000001689B9CB3000001FD000000000000000000000000000000000000002600000000kubeshark-cli-52.8.1/utils/browser.gopackage utils

import (
	"fmt"
	"os/exec"
	"runtime"

	"github.com/rs/zerolog/log"
)

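// OpenBrowser opens the given URL in the platform's default browser and logs
// an error if it fails.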
func OpenBrowser(url string) {
	var err error

	switch runtime.GOOS {
	case "linux":
		err = exec.Command("xdg-open", url).Start()
	case "windows":
		err = exec.Command("rundll32", "url.dll,FileProtocolHandler", url).Start()
	case "darwin":
		err = exec.Command("open", url).Start()
	default:
		err = fmt.Errorf("unsupported platform")
	}

	if err != nil {
		log.Error().Err(err).Msg("While trying to open a browser")
	}
}
07070100000071000081A4000000000000000000000001689B9CB300000121000000000000000000000000000000000000002500000000kubeshark-cli-52.8.1/utils/colors.gopackage utils

const (
	Black   = "\033[1;30m%s\033[0m"
	Red     = "\033[1;31m%s\033[0m"
	Green   = "\033[1;32m%s\033[0m"
	Yellow  = "\033[1;33m%s\033[0m"
	Blue    = "\033[1;34m%s\033[0m"
	Magenta = "\033[1;35m%s\033[0m"
	Cyan    = "\033[1;36m%s\033[0m"
	White   = "\033[1;37m%s\033[0m"
)
07070100000072000081A4000000000000000000000001689B9CB300000836000000000000000000000000000000000000002300000000kubeshark-cli-52.8.1/utils/http.gopackage utils

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"strings"
)

const (
	X_KUBESHARK_CAPTURE_HEADER_KEY          = "X-Kubeshark-Capture"
	X_KUBESHARK_CAPTURE_HEADER_IGNORE_VALUE = "ignore"
)

// Get - When err is nil, resp always contains a non-nil resp.Body.
// Caller should close resp.Body when done reading from it.
func Get(url string, client *http.Client) (*http.Response, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	AddIgnoreCaptureHeader(req)

	return checkError(client.Do(req))
}

// Post - When err is nil, resp always contains a non-nil resp.Body.
// Caller should close resp.Body when done reading from it.
func Post(url, contentType string, body io.Reader, client *http.Client, licenseKey string) (*http.Response, error) {
	req, err := http.NewRequest(http.MethodPost, url, body)
	if err != nil {
		return nil, err
	}
	AddIgnoreCaptureHeader(req)
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("License-Key", licenseKey)

	return checkError(client.Do(req))
}

// Do - When err is nil, resp always contains a non-nil resp.Body.
// Caller should close resp.Body when done reading from it.
func Do(req *http.Request, client *http.Client) (*http.Response, error) {
	return checkError(client.Do(req))
}

func checkError(response *http.Response, errInOperation error) (*http.Response, error) {
	if errInOperation != nil {
		return response, errInOperation
		// Check only for status != 200 (rather than status >= 300); the Hub returns only 200 on success.
	} else if response.StatusCode != http.StatusOK {
		body, err := io.ReadAll(response.Body)
		response.Body.Close()
		response.Body = io.NopCloser(bytes.NewBuffer(body)) // rewind
		if err != nil {
			return response, err
		}

		errorMsg := strings.ReplaceAll(string(body), "\n", ";")
		return response, fmt.Errorf("got response with status code: %d, body: %s", response.StatusCode, errorMsg)
	}

	return response, nil
}

func AddIgnoreCaptureHeader(req *http.Request) {
	req.Header.Set(X_KUBESHARK_CAPTURE_HEADER_KEY, X_KUBESHARK_CAPTURE_HEADER_IGNORE_VALUE)
}
07070100000073000081A4000000000000000000000001689B9CB300000123000000000000000000000000000000000000002300000000kubeshark-cli-52.8.1/utils/json.gopackage utils

import (
	"strconv"
	"strings"

	"github.com/rs/zerolog/log"
)

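// UnescapeUnicodeCharacters replaces escaped \uXXXX sequences in raw with
// their literal characters, returning the input unchanged on error.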
func UnescapeUnicodeCharacters(raw string) string {
	str, err := strconv.Unquote(strings.Replace(strconv.Quote(raw), `\\u`, `\u`, -1))
	if err != nil {
		log.Error().Err(err).Send()
		return raw
	}
	return str
}
07070100000074000081A4000000000000000000000001689B9CB30000012F000000000000000000000000000000000000002500000000kubeshark-cli-52.8.1/utils/pretty.gopackage utils

import (
	"bytes"

	"github.com/goccy/go-yaml"
)

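// PrettyYaml encodes data as a YAML string using two-space indentation.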
func PrettyYaml(data interface{}) (result string, err error) {
	buffer := new(bytes.Buffer)
	encoder := yaml.NewEncoder(buffer, yaml.Indent(2))

	err = encoder.Encode(data)
	if err != nil {
		return
	}
	result = buffer.String()
	return
}
07070100000075000081A4000000000000000000000001689B9CB3000003A8000000000000000000000000000000000000002400000000kubeshark-cli-52.8.1/utils/slice.gopackage utils

func Contains(slice []string, containsValue string) bool {
	for _, sliceValue := range slice {
		if sliceValue == containsValue {
			return true
		}
	}

	return false
}

func Unique(slice []string) []string {
	keys := make(map[string]bool)
	var list []string

	for _, entry := range slice {
		if _, value := keys[entry]; !value {
			keys[entry] = true
			list = append(list, entry)
		}
	}

	return list
}

func EqualStringSlices(slice1 []string, slice2 []string) bool {
	if len(slice1) != len(slice2) {
		return false
	}

	for _, v := range slice1 {
		if !Contains(slice2, v) {
			return false
		}
	}

	return true
}

// Diff returns the elements in `a` that aren't in `b`.
func Diff(a, b []string) []string {
	mb := make(map[string]struct{}, len(b))
	for _, x := range b {
		mb[x] = struct{}{}
	}
	var diff []string
	for _, x := range a {
		if _, found := mb[x]; !found {
			diff = append(diff, x)
		}
	}
	return diff
}
07070100000076000081A4000000000000000000000001689B9CB30000023B000000000000000000000000000000000000002300000000kubeshark-cli-52.8.1/utils/wait.gopackage utils

import (
	"context"
	"os"
	"os/signal"
	"syscall"

	"github.com/rs/zerolog/log"
)

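// WaitForTermination blocks until the context is done or a termination signal
// (SIGINT, SIGTERM or SIGQUIT) is received, in which case it calls cancel.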
func WaitForTermination(ctx context.Context, cancel context.CancelFunc) {
	log.Debug().Msg("Waiting to finish...")
	sigChan := make(chan os.Signal, 1)
	signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM, syscall.SIGQUIT)

	// block until ctx cancel is called or termination signal is received
	select {
	case <-ctx.Done():
		log.Debug().Msg("Context done.")
		break
	case <-sigChan:
		log.Debug().Msg("Got a termination signal, canceling execution...")
		cancel()
	}
}
07070100000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000B00000000TRAILER!!!842 blocks