You will need an AWS account, a Kubernetes cluster (a local cluster is fine), and Python.
These instructions assume a target Docker architecture of linux/arm64 (typical for macOS on Apple silicon).
```shell
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -o bin/controller ./cmd/controller/
docker build -f Dockerfile.local -t lambda-controller:local .
```
Create a temporary credentials file at `~/.aws.temporary.creds` that you don't mind using with the ACK controller:
```
[temp-profile]
aws_access_key_id=<access key>
aws_secret_access_key=<secret key>
aws_session_token=<session token>
```
- Run `kubectl create namespace ack-system`
- Run `kubectl create secret generic aws-credentials --from-file=credentials=$HOME/.aws.temporary.creds -n ack-system`
- Run:
```shell
helm install ack-lambda-controller ./helm \
  --namespace ack-system \
  --set image.repository=lambda-controller \
  --set image.tag=local \
  --set aws.region=ap-southeast-2 \
  --set aws.credentials.secretName=aws-credentials \
  --set aws.credentials.secretKey=credentials \
  --set aws.credentials.profile=temp-profile \
  --set installScope=cluster \
  --set leaderElection.enabled=false
```
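Once the controller pod is running, you can smoke-test it by applying a `Function` custom resource. The manifest below is an illustrative sketch, not part of the test suite: the bucket, key, and role ARN are placeholders you must replace with real values, and the field names assume the `lambda.services.k8s.aws/v1alpha1` API served by this controller.

```yaml
apiVersion: lambda.services.k8s.aws/v1alpha1
kind: Function
metadata:
  name: smoke-test-function
spec:
  name: smoke-test-function
  packageType: Zip
  code:
    # Placeholder bucket/key: upload a zipped handler here first.
    s3Bucket: my-ack-test-bucket
    s3Key: main.zip
  # Placeholder role: must be assumable by lambda.amazonaws.com.
  role: arn:aws:iam::123456789012:role/my-lambda-execution-role
  runtime: python3.12
  handler: main.handler
```

Apply it with `kubectl apply -f`, then check `kubectl describe functions smoke-test-function` to confirm the controller reconciles it.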
In test/e2e:
- Create a virtual environment: `python -m venv venv`
- Activate the virtual environment: `source venv/bin/activate`
- Install testing requirements: `pip install -r requirements.txt`
In test/e2e:
- Run `AWS_PROFILE=my-profile ./setup.sh --pickle` (or run `setup.sh` and then `pickle.sh`)
In test/e2e/resources/lambda_function:
- Run `AWS_PROFILE=my-profile aws ecr create-repository --repository-name ack-e2e-testing-lambda-controller`
- Run `AWS_PROFILE=my-profile make`
- Run:

```shell
repo=$(AWS_PROFILE=my-profile aws ecr describe-repositories \
  --repository-names ack-e2e-testing-lambda-controller \
  --query 'repositories[].repositoryUri' --output text)
AWS_PROFILE=my-profile aws ecr get-login-password | docker login --username AWS --password-stdin "${repo%/*}"
docker push ${repo}:v1
```

- Run `zip main.zip main.py` and `zip updated_main.zip updated_main.py`
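The zip commands above package the handlers exercised by the e2e tests. For orientation, a minimal handler of the shape `main.py` uses might look like the sketch below; this is illustrative only (the names and body are assumptions), and the real files live in test/e2e/resources/lambda_function.

```python
# Illustrative minimal Lambda handler; the actual main.py packaged
# above lives in test/e2e/resources/lambda_function.
import json


def handler(event, context):
    # Echo the event back so an invocation is easy to verify.
    return {
        "statusCode": 200,
        "body": json.dumps({"received": event}),
    }
```

Lambda invokes `handler(event, context)` as named by the function's `handler` setting (e.g. `main.handler`), so the module and function names must match what the function resource declares.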
In test/e2e:
- Run `AWS_PROFILE=my-profile pytest`
In test/e2e:
- Run `AWS_PROFILE=my-profile ./teardown.sh`