Magalix Blog: What Is A Service Mesh? - Mar 10, 2020
[FR] Metanext > Service Mesh sur Kubernetes - May 30, 2020
MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols.
Ingresses and Load Balancers in Kubernetes with MetalLB and nginx-ingress by Adatlas - September 8, 2022
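To hand out addresses to LoadBalancer services on bare metal, MetalLB needs an address pool. A minimal sketch, assuming a recent CRD-based MetalLB (v0.13+) is already installed and that the address range below is free on your network:
# declares an address pool and announces it in layer 2 mode (the range is a placeholder)
cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
EOF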
An all-in-one test automation solution.
Puppeteer is a Node library which provides a high-level API to control Chrome or Chromium over the DevTools Protocol. Puppeteer runs headless by default, but can be configured to run full (non-headless) Chrome or Chromium.
An open source load testing tool. Define user behaviour with Python code, and swarm your system with millions of simultaneous users.
Declarative continuous delivery with a fully-loaded UI
What's New in Argo CD 2.6 | StruggleOps Stream Highlights - February 7, 2023
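A minimal sketch of registering and syncing an application with the argocd CLI, using the public argocd-example-apps repository from the official getting started guide:
# registers an application from a Git repository (after logging in with: argocd login <server>)
argocd app create guestbook \
  --repo https://github.com/argoproj/argocd-example-apps.git \
  --path guestbook \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default
# deploys (syncs) the application and checks its state
argocd app sync guestbook
argocd app get guestbook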
Dapr helps developers build event-driven, resilient distributed applications. Whether on-premises, in the cloud, or on an edge device, Dapr helps you tackle the challenges that come with building microservices and keeps your code platform agnostic.
→ dapr.io
Dapr joins CNCF Incubator - November 3, 2021
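A quick sketch with the dapr CLI (assuming Docker is available locally; the app id, port, and start command are examples):
# initializes Dapr locally (pulls the runtime containers)
dapr init
# runs an application with a Dapr sidecar
dapr run --app-id myapp --app-port 3000 -- npm start
# lists running Dapr applications
dapr list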
Envoy is an open source edge and service proxy, designed for cloud-native applications
Kong: Service Mesh 101: The Role of Envoy - August 26, 2021
Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker
→ redis.io
→ evergreen-ci/evergreen, evergreen.mongodb.com
Testing Linearizability with Jepsen and Evergreen: “Call Me Continuously!” - February 16, 2017
Evergreen Continuous Integration: Why We Reinvented The Wheel - July 27, 2016
How We Test MongoDB: Evergreen - June 1, 2015
Understanding gRPC Concepts, Use Cases & Best Practices - January 02, 2023
gRPC Web with .NET - December 10, 2020
Fluentd is an open source data collector for unified logging layer. (It) allows you to unify data collection and consumption for a better use and understanding of data.
CNCF Tools Overview: Fluentd – Unified Logging Layer - Feb 26, 2020
Online payment processing for internet businesses.
Stripe is a suite of payment APIs that powers commerce for online businesses of all sizes, including fraud prevention, and subscription management. Use Stripe's payment platform to accept and process payments online for easy-to-use commerce solutions.
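A minimal sketch of the API style, creating a payment intent with curl (the secret key is a placeholder; note the trailing colon for basic auth):
# creates a payment intent of $20.00
curl https://api.stripe.com/v1/payment_intents \
  -u "sk_test_yourSecretKey:" \
  -d amount=2000 \
  -d currency=usd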
Tuleap is an ALM (Application Lifecycle Management) tool, an open source solution provided by Enalean.
Azure DevOps, previously known as VSTS (Visual Studio Team Services), is the application lifecycle platform provided by Microsoft.
Wiki
Azure Boards
Azure Repositories (git)
: Build & Release
Azure Tests
Artifacts
Use [[_TOC_]] to have an automatically generated table of contents.
Automate, customize, and execute your software development workflows right in your repository with GitHub Actions. You can discover, create, and share actions to perform any job you'd like, including CI/CD, and combine actions in a completely customized workflow.
Single responsibility principle
Interface segregation principle
Dependency inversion principle
OOD: Object Oriented Design
DRY: Don't Repeat Yourself
KISS: Keep It Simple Stupid
YAGNI: You Aren't Gonna Need It
The easiest way to explore and manipulate your MongoDB data
The GUI for MongoDB. Visually explore your data. Run ad hoc queries in seconds. Interact with your data with full CRUD functionality. View and optimize your query performance. Available on Linux, Mac, or Windows. Compass empowers you to make smarter decisions about indexing, document validation, and more.
Navigate to the download page, review and set the version and platform, then click "Download" to start the download.
For Windows, you'll get a file with a name like "mongodb-compass-1.16.3-win32-x64.exe"; simply execute it.
Built by MongoDB engineers, Ops Manager is the management platform that makes it easy to deploy, monitor, back up, and scale MongoDB on your own infrastructure.
Ops Manager is an incredible tool provided by MongoDB. You need a license to use it in production, but the benefits are clearly worth it.
You can see it as a dashboard open to anyone inside your organization, where you can completely manage and automate your MongoDB instances, replica sets, and sharded clusters, and get many live insights about usage and data.
Node Package Manager
Add another registry on MyGet: npm config set @mycompany:registry https://www.myget.org/F/mycompany/npm/
npm install @mycompany/mypackage@1.0.0
Or reference the package tarball URL directly (see the package.json example further down):
A message broker is a component of an IT infrastructure whose primary goal is to receive messages and make them available to other components.
It is a way to decouple applications inside an information system and provide high performance.
Apache Kafka's goal is building real-time data pipelines and streaming apps
Azure Service Bus is a multi-tenant cloud messaging service handling asynchronous operations
RabbitMQ is an open source message broker, whose commercial version is managed by Pivotal Software
One codebase. Any platform. Now in Vue, Angular, React.
An open source mobile UI toolkit for building high quality, cross-platform native and web app experiences. Move faster with a single code base, running everywhere with JavaScript and the Web.
npm i ionic-angular
| Tool | Type | Language | Architecture | Target | Concepts |
|------|------|----------|--------------|--------|----------|
| Ansible | Configuration Management | YAML | Agentless | All | Modules, Playbooks |
| Bicep | Azure | JSON | | Azure | |
| Chef | Configuration Management | Ruby | master/slave | All | |
| Pulumi | | | | | |
| Terraform | Orchestration | HCL, Go | | All | Providers, Modules |
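To illustrate the "Modules, Playbooks" concepts from the table above, a minimal hedged sketch (host group, inventory file, and package are examples):
# writes a minimal playbook using the apt module
cat > playbook.yml <<'EOF'
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present
EOF
# runs it against an inventory (agentless, over SSH)
ansible-playbook -i inventory.ini playbook.yml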
.NET Conf: Focus on Microservices - July 30
# checks Node.js version
node -v
# checks NPM version
npm -v
# clears the cache (in case of issues)
npm cache clean --force
# updates npm
npm install -g npm
npm update -g
# displays dependencies on one package (natives in this example)
npm ls natives
# displays peer dependencies
npm view <package>@<version> peerDependencies
# lists registries
npm config list registry
# resets to the default registry
npm config set registry https://registry.npmjs.org/
# installs the tool globally
npm install -g npm-check-updates
# runs the tool against a project (in the project root directory)
ncu -u
# installs the packages from the newly updated package.json file
npm install
{
"dependencies": {
"mypackage": "https://www.myget.org/F/mycompany/npm/mypackage/-/mypackage-1.0.0.tgz"
}
}
| Shortcut | Action |
|----------|--------|
| t | Open file finder |
| Ctrl + k | Navigate, search, and run commands directly from your keyboard |
| . | Open Visual Studio Code Web (https://github.com will be replaced by https://github.dev) |
kind is a tool for running local Kubernetes clusters using Docker container “nodes”. kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
# makes sure kind is available from the command line
kind version
# creates a cluster
kind create cluster
# gets clusters
kind get clusters
# sets kubectl context
kubectl cluster-info --context kind-kind
# looks at images
docker exec -it my-node-name crictl images
# builds an image
docker build -t my-custom-image:unique-tag ./my-image-dir
kind load docker-image my-custom-image:unique-tag
kubectl apply -f my-manifest-using-my-image:unique-tag
# deletes a cluster
kind delete cluster
Follow Using WSL2
Create the cluster config file
# cluster-config.yml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
extraPortMappings:
- containerPort: 30000
hostPort: 30000
protocol: TCP
Workaround on Ubuntu 20.04 to fix the error while creating the cluster (see issue #2323)
kind create cluster --config=cluster-config.yml --image kindest/node:v1.17.17
LoadBalancer Services using Kubernetes in Docker by Owain Williams - September 20, 2022
Kubernetes-native workflow engine supporting DAG and step-based workflows
OpenShift Blog > Creating an Argo Workflow With Vault Integration Using Helm - February 17, 2021
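A minimal sketch with the argo CLI, submitting the hello-world example from the project's examples folder (assuming Argo Workflows is installed in the argo namespace):
# submits and watches the official hello-world workflow
argo submit -n argo --watch https://raw.githubusercontent.com/argoproj/argo-workflows/master/examples/hello-world.yaml
# lists workflows
argo list -n argo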
# Alpine
docker run --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=mysecretpassword -d postgres:alpine
# Official
docker run --name postgres966 -p 5432:5432 -e POSTGRES_PASSWORD=mysecretpassword -d postgres:9.6.6
With Docker (see hub.docker.com)
docker pull dpage/pgadmin4
docker run -p 80:80 -e "[email protected]" -e "PGADMIN_DEFAULT_PASSWORD=SuperSecret" --name pgadmin4 -d dpage/pgadmin4
# open http://localhost and login
Backup can be done with pgAdmin (I personally prefer the plain format to get human readable SQL content).
Restore can be done with Docker: cat D:\Temp\dump.sql | docker exec -i postgres966 psql -U postgres (you may have to add the missing role entries, which are not exported, such as CREATE ROLE mycompany SUPERUSER;).
ALTER USER user WITH PASSWORD 'newpassword';
SELECT * FROM pg_catalog.pg_database;
SELECT
table_schema || '.' || table_name
FROM
information_schema.tables
WHERE
table_type = 'BASE TABLE'
AND
table_schema NOT IN ('pg_catalog', 'information_schema');
SELECT * FROM pg_catalog.pg_tables WHERE schemaname = 'gracethd';
SELECT *
FROM information_schema.columns
WHERE table_schema = 'gracethd'
AND table_name = 'adresse';
Distributed Transactions extending MongoDB’s multi-document ACID guarantees from replica sets to sharded clusters, enabling you to serve an ever broader range of use cases.
On-Demand Materialized Views using the new $merge operator. Caching the output of a large aggregation in a collection is a common pattern, and the new $merge operator lets you update those results efficiently instead of completely recalculating them (see the example below).
Wildcard Indexes make it easy and natural to model highly heterogeneous collections like product catalogs, without sacrificing great index support. You simply define a filter that automatically indexes all matching fields, sub-documents, and arrays in a collection.
MongoDB Query Language enhancements such as more expressive updates, new math operators, and expanded regex support. update and findAndModify commands can now reference existing fields, and incorporate aggregation pipelines for even more expressivity.
Retryable Reads and Writes, reducing the complexity of writing code that handles transient cluster failures.
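A sketch of the on-demand materialized view pattern in the mongo shell (collection and field names are hypothetical):
// caches an aggregation result into salesSummary instead of recomputing it from scratch
db.sales.aggregate([
  { $group: { _id: "$region", total: { $sum: "$amount" } } },
  { $merge: { into: "salesSummary", whenMatched: "replace", whenNotMatched: "insert" } }
])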
What's new in MongoDB 4.2 (slides) - August 13, 2019
MongoDB 4.2 is now GA: Ready for your Production Apps - August 13, 2019
Experiments on GitHub: devpro/dotnetcore-logging
For a netstandard library:
Add Microsoft.Extensions.Logging to the project (do not add a strong dependency to a logging framework such as log4net or NLog!).
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netcoreapp2.0</TargetFramework>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.Extensions.Logging" Version="2.0.0" />
</ItemGroup>
<!-- ... -->
</Project>
Add an ILogger dependency (IoC) to the class constructor and the related private field:
using Microsoft.Extensions.Logging;
// ...
public class MyClass
{
private readonly ILogger _logger;
public MyClass(ILogger<MyClass> logger)
{
_logger = logger;
}
public void MyMethod()
{
_logger.LogInformation("MyMethod called");
}
}
Homepage: serilog.net
Possible with Serilog
NLog.Redis was not yet available for .NET Core (as of May 10, 2018).
A Versioned API, designed to preserve application behavior through upgrades
Upgrade Fearlessly with the MongoDB Versioned API - June 1st, 2021
Native time series collections and clustered indexing
Paginations 1.0: Time Series Collections in five minutes - October 21, 2021
Window functions and new temporal operators
Fast, unopinionated, minimalist web framework for Node.js
npm install express-generator
node_modules\express-generator\bin\express-cli.js -h
node_modules\express-generator\bin\express-cli.js myapp
The Six Principles For Resilient Evolvability - November, 2020
Enterprise-grade Serverless on your own terms. Kubernetes-based platform to deploy and manage modern serverless workloads.
→ knative.dev, github.com/knative
What is Knative? - January 8, 2019
Distributed tracing with Knative, OpenTelemetry and Jaeger - August 20, 2021
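A minimal sketch with the kn CLI, deploying the Knative hello-world sample (assuming Knative Serving is installed on the cluster):
# deploys a serverless service from a container image
kn service create hello --image gcr.io/knative-samples/helloworld-go --env TARGET=World
# lists services and shows the generated URL
kn service list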
# install the CLI globally
npm install -g @angular/cli
# creates a new Angular application with interactive questions
ng new
Computer network types
Local Area Network (LAN)
Storage Area Network (SAN)
Virtual Private Network (VPN)
Wide Area Network (WAN)
Wireless Local Area Network (WLAN)
Classless Inter-Domain Routing (CIDR)
Content Delivery Network (CDN)
Distributed Denial of Service (DDoS)
Domain Name System (DNS)
Gateway
Firewall
Quality of Service (QoS)
Load balancers
Network Address Translation (NAT)
Network topologies
Bus
Ring
Star
Mesh
Tree
Open Systems Interconnection (OSI) model
(1) Physical Layer
(2) Data Link Layer
(3) Network Layer
(4) Transport Layer
(5) Session Layer
(6) Presentation Layer
(7) Application Layer
TCP/IP
Internet Protocol (IP) addresses
Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) ports
...
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
Containerization is the packaging together of software code with all its necessary components like libraries, frameworks, and other dependencies so that they are isolated in their own "container".
Rancher Desktop - October 28, 2021
London, UK - 19 & 20 September
Excellent summary:
Videos:
Talks:
"How to make Angular Fast"
"How Angular Works"
"Quantum facades"
"Building Angular apps with internationalization (i18n) in mind" by Naomi Meyer
Our collaboration with standard committees, Chrome, and Bazel
Automating DX for faster Web
Intelligent tooling
Enabling best practices
ng deploy
Workshops:
Install Ruby
On Windows, download the latest version with DevKit from the RubyInstaller downloads page and execute it; agree to run ridk install at the end
Install the jekyll gem
Run gem install bundler jekyll
Open the terminal at the root folder and run bundle exec jekyll serve
This code base was created with the command: jekyll new my-website
# checks docker is working
docker run hello-world
# checks make utility is present
make --version
# creates a new project
mkdir jekyll-site
cd jekyll-site
docker run -v $(pwd):/srv/jekyll jekyll/jekyll:latest jekyll new .
# starts web server (open http://localhost:4000/ in a browser)
docker run -v $(pwd):/srv/jekyll -p 4000:4000 -it jekyll/jekyll:latest jekyll serve
→ azure.microsoft.com/services/devops/pipelines
Caching and faster artifacts in Azure Pipelines - July 24, 2019
New IP firewall rules for Azure DevOps Services - May 31, 2019
Microsoft-hosted agents (public IP ranges)
Example of pipelines in MicrosoftDocs GitHub repositories.
.NET Blog article on How the .NET Team uses Azure Pipelines to produce Docker Images
Uploading to Codecov just got easier - November 13, 2019
By default, it won't work for Artifacts: you need to click on "..." in the permission pane of your feed and select "Allow project-scoped builds".
Secure and share packages using feed permissions
- task: NuGetAuthenticate@0
  displayName: 'Authenticate to NuGet feed'
- task: NuGetCommand@2
  displayName: 'Push NuGet packages'
  inputs:
    command: 'push'
    packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg;!$(Build.ArtifactStagingDirectory)/**/*.symbols.nupkg'
    nuGetFeedType: 'internal'
    publishVstsFeed: $(azure.artifact.feed.id)
    allowPackageConflicts: true
Codebase
Dependencies
Config
Backing services
Build, release, run
Processes
Port binding
Concurrency
Disposability
Dev/prod parity
Logs
Admin processes
aim42 is the systematic approach to improve software systems and architectures
The C4 model for visualising software architecture: Context, Containers, Components and Code
Extreme programming
[Spike](https://en.wikipedia.org/wiki/Spike_(software_development))
A spike is a product-testing method (...) that uses the simplest possible program to explore potential solutions. It is used to determine how much work will be required to solve or work around a software issue. Typically, a 'spike test' involves gathering additional information or testing for easily reproduced edge cases. The term is used in agile software development approaches like Scrum or Extreme Programming.
TDD (Test Driven Development)
ITIL
ITIL 4: An A – Z Guide By Joe the IT Guy - Mar 21, 2019
Canary release
Martin Fowler website article - June 25, 2014
Lessons learned and best practices from Google and Waze - January 14, 2019
A/B testing
Helm 3: The package manager for Kubernetes. It is the best way to find, share, and use software built for Kubernetes.
Start with the installation guide.
On Windows, get the zip file from the Release page and extract the exe file to a folder defined in the PATH environment variable.
Make sure helm is available from the command line: helm version.
Then follow the quickstart guide.
Add at least one repository (list them with helm repo ls), for instance: helm repo add stable https://charts.helm.sh/stable.
Run helm repo update to update the repositories.
You can look at what is available with helm search repo stable.
Install the first chart with helm install stable/mysql --generate-name.
The output of this command is very interesting:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster: mysql-xxxxxxx.default.svc.cluster.local
To get your root password run: MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql-xxxxxxx -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)
To connect to your database:
Run an Ubuntu pod that you can use as a client: kubectl run -i --tty ubuntu --image=ubuntu:18.04 --restart=Never -- bash -il
Install the mysql client: $ apt-get update && apt-get install mysql-client -y
Connect using the mysql cli, then provide your password: $ mysql -h mysql-xxxxxxx -p
To connect to your database directly from outside the K8s cluster: MYSQL_HOST=127.0.0.1, MYSQL_PORT=3306. Execute the following command to route the connection: kubectl port-forward svc/mysql-xxxxxxx 3306, then mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}.
As usual, look at the progress with kubectl get pods ("STATUS" column).
At the end, clean your cluster with helm uninstall mysql-xxxxxxx.
mychart
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   └── service.yaml
└── values.yaml
| Command | Action |
|---------|--------|
| helm show chart stable/xxxx | Get a simple idea of the features of chart stable/xxxx (stable/mysql for example) |
| helm list | See what has been released with Helm |
| helm help xxx | Get the help message for the xxx command (install for example) |
| helm ls | What has been released using Helm |
| helm uninstall <name> | Uninstall a release |
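In day-to-day use, helm upgrade --install is a handy idempotent variant of the commands above (release name and value are examples):
# installs the release if absent, upgrades it otherwise, overriding one chart value
helm upgrade --install mydb stable/mysql --set mysqlRootPassword=mysecretpassword
# reviews the values used by the release
helm get values mydb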
Local Kubernetes, focused on application development & education
Follow the instructions given in the official installation guide.
Make sure Docker Desktop has allocated at least 3 GB of RAM.
Important: If you're on Windows, open a command window as admin.
Run:
(Optional) minikube config set vm-driver hyperv to set the default driver (here Hyper-V)
minikube start to start the Kubernetes node
minikube status to get the overall status
minikube pause to pause it
minikube stop to stop it
Run minikube dashboard to open the web dashboard.
Run kubectl config use-context minikube to be able to use kubectl on your local Kubernetes instance.
Run minikube delete and, if needed, delete the .kube and .minikube folders in your home directory.
Incorrect date (can lead to errors with Docker pull)
# runs etcd from the extracted release folder
cd etcd-v3.4.12
etcd
# make sure Go is installed
# clone the repository
git clone https://github.com/etcd-io/etcd.git
cd etcd
# use the build script
./build
# In Windows we can't set the data directory (--mount type=bind,source=//d/ProgramData/etcd-data.tmp,destination=/etcd-data) because etcd checks folder permissions (700 versus 777), see https://github.com/etcd-io/etcd/blob/release-3.4/pkg/fileutil/fileutil.go
docker run -p 2379:2379 -p 2380:2380 --name etcd-gcr-v3.4.12 gcr.io/etcd-development/etcd:v3.4.12 /usr/local/bin/etcd --name s1 --data-dir /etcd-data --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://0.0.0.0:2379 --listen-peer-urls http://0.0.0.0:2380 --initial-advertise-peer-urls http://0.0.0.0:2380 --initial-cluster s1=http://0.0.0.0:2380 --initial-cluster-token tkn --initial-cluster-state new --log-level info --logger zap --log-outputs stderr
docker exec etcd-gcr-v3.4.12 /bin/sh -c "/usr/local/bin/etcd --version"
docker exec etcd-gcr-v3.4.12 /bin/sh -c "/usr/local/bin/etcdctl version"
docker exec etcd-gcr-v3.4.12 /bin/sh -c "/usr/local/bin/etcdctl endpoint health"
docker exec etcd-gcr-v3.4.12 /bin/sh -c "/usr/local/bin/etcdctl put foo bar"
docker exec etcd-gcr-v3.4.12 /bin/sh -c "/usr/local/bin/etcdctl get foo"
docker exec etcd-gcr-v3.4.12 /bin/sh -c "/usr/local/bin/etcdctl del foo"
docker stop etcd-gcr-v3.4.12
docker rm etcd-gcr-v3.4.12
| Command | Action |
|---------|--------|
| etcdctl member list | Lists all members in the cluster |
# for Windows
SET ETCDCTL_API=3
# see https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/
etcdctl snapshot save etcd_snapshot.db
etcdctl --write-out=table snapshot status etcd_snapshot.db
# see https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/recovery.md#restoring-a-cluster
etcdctl snapshot restore etcd_snapshot.db --name m1 --initial-cluster m1=http://0.0.0.0:2380 --initial-cluster-token etcd-cluster-1 --initial-advertise-peer-urls http://0.0.0.0:2380 # will create m1.etcd folder
etcd --name m1 --data-dir m1.etcd --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://0.0.0.0:2379 --listen-peer-urls http://0.0.0.0:2380
| Command | Action |
|---------|--------|
| minikube service hello-minikube | Launch a web browser on a service |
| minikube service xxx --url | Display the URL for a given service (xxx) |
| minikube config set memory 16384 | Update the default memory limit (2048 by default) |
| minikube addons list | Browse the catalog of easily installed Kubernetes services |
| minikube tunnel | Start a tunnel to create a routable IP for a "balanced" deployment |
| minikube start -p aged --kubernetes-version=v1.16.1 | Create another cluster running an older Kubernetes release |
| minikube ip | Display the Kubernetes node IP |
# Enable metrics-server (https://github.com/kubernetes-sigs/metrics-server)
minikube addons enable metrics-server
kubectl get apiservices
minikube ssh -- date
minikube ssh
date --set "12 Aug 2020 17:20:00"
exit
minikube ssh -- docker run -i --rm --privileged --pid=host debian nsenter -t 1 -m -u -n -i date -u $(date -u +%m%d%H%M%Y)
minikube ssh -- date
Martin Fowler website page - January 23, 2004
Microservices are small, modular, and independently deployable services. Docker containers (for Linux and Windows) simplify deployment and testing by bundling a service and its dependencies into a single unit, which is then run in an isolated environment.
Articles:
Read:
Definition on wikipedia
Starting point with Tackle Business Complexity in a Microservice with DDD and CQRS Patterns
Code examples:
If you read French, you can look at this article from Octo.
Feature flags are a great way to do continuous delivery with the latest source code and activate new functionalities when needed. But there is a cost, as described in an article on opensource.com.
Two standards are recommended:
REST
gRPC
As of 2019, REST is still more widely used, but gRPC brings great improvements and will be used more and more for new microservices.
You can easily find comparisons between REST and gRPC on the internet, for example on code.tutsplus.com. There is an interesting summary on docs.microsoft.com.
Refresh tokens: auth0.com
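A sketch of the refresh token exchange (standard OAuth2 flow as documented on auth0.com; domain, client id, and token are placeholders):
# exchanges a refresh token for a new access token
curl --request POST https://YOUR_DOMAIN/oauth/token \
  --header "content-type: application/x-www-form-urlencoded" \
  --data "grant_type=refresh_token" \
  --data "client_id=YOUR_CLIENT_ID" \
  --data "refresh_token=YOUR_REFRESH_TOKEN"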
k3d is a lightweight wrapper to run k3s (Rancher Lab’s minimal Kubernetes distribution) in docker.
Download & install latest release (ref. k3d.io)
# runs installation script
wget -q -O - https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
| Command | Action |
|---------|--------|
| k3d cluster create <mycluster> | Create a cluster |
| k3d cluster list | List the clusters |
| k3d cluster stop <mycluster> | Stop a cluster |
| k3d cluster start <mycluster> | Start a cluster |
| k3d cluster delete <mycluster> | Delete a cluster |
Create a cluster
# creates a cluster
k3d cluster create mycluster -p "8081:80@loadbalancer" -p "8082:443@loadbalancer" --agents 2
# displays cluster information (kubectl configuration is automatically updated and set to use the new cluster context)
kubectl cluster-info
# ensures coredns and traefik (ingress controller) are deployed by default (k3s behavior)
kubectl get deploy -n kube-system
# (optional) writes and uses specific kubectl configuration
export KUBECONFIG="$(k3d kubeconfig write mycluster)"
Deploy a basic workload (ref. k3d Guides > Exposing Services)
# creates a nginx (web server) deployment
kubectl create deployment nginx --image=nginx
# exposes the deployment with a service
kubectl create service clusterip nginx --tcp=80:80
# provides an ingress to the service
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: nginx.dev.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF
# checks everything is ok
kubectl get svc,pod,deploy,ingress
# makes sure the website can be reached
curl localhost:8081/
Update the hosts file:
127.0.0.1 nginx.dev.local
Make sure ingress is working
curl nginx.dev.local:8081/
Clean-up
# deletes the cluster
k3d cluster delete mycluster
CoreDNS configuration
# displays coredns configmap
kubectl -n kube-system get configmap coredns -o yaml
Use the power of .NET and C# to build full stack web apps without writing a line of JavaScript.
→ dotnet.microsoft.com/apps/aspnet/web-apps/blazor
Blazor Server in .NET Core 3.0 scenarios and performance - Oct 10, 2019
# create a new Blazor Server App project
dotnet new blazorserver -o <project-name>
# create a new Blazor WebAssembly App project
dotnet new blazorwasm -o <project-name>
# create a new Razor component
dotnet new razorcomponent -n <component-name> -o <folder>
# create a new Razor page
dotnet new page -n <page-name> -o <folder>
BlazorDay 2020 - Jun 18, 2020
The .NET command-line interface (CLI) is a cross-platform toolchain for developing, building, running, and publishing .NET applications. The .NET CLI is included with the .NET SDK.
Examples:
dotnet new webapi --output src/PalTracker --name PalTracker will use the template "ASP.NET Core Web API"
dotnet new xunit --output test/PalTrackerTests --name PalTrackerTests will use the template "xUnit Test Project"
dotnet new sln --name PalTracker will use the template "Solution File"
Examples:
dotnet add test/PalTrackerTests reference src/PalTracker/PalTracker.csproj
dotnet add test/PalTrackerTests package Microsoft.AspNetCore.TestHost --version 2.2.0
Examples:
dotnet sln PalTracker.sln add src/PalTracker/PalTracker.csproj
Examples:
dotnet run --project src/PalTracker
Examples:
dotnet publish src/PalTracker --configuration Release
Examples:
dotnet test test/PalTrackerTests --filter PalTrackerTests.InMemoryTimeEntryRepositoryTest
See also:
| Command | Description |
|---------|-------------|
| dotnet --version | Display information on the installed version |
| dotnet new | View the available templates (see docs.microsoft.com) |
| dotnet add reference | Add project-to-project (P2P) references (see docs.microsoft.com) |
| dotnet add package | Add a package reference to a project file (see docs.microsoft.com) |
| dotnet sln | Modify a .NET Core solution file (see docs.microsoft.com) |
| dotnet run | Run source code without any explicit compile or launch commands (see docs.microsoft.com) |
| dotnet publish | Pack the application and its dependencies into a folder for deployment to a hosting system (see docs.microsoft.com) |
| dotnet test | Run the tests (see docs.microsoft.com) |
One framework. Mobile & desktop.
→ angular.io, API
Angular has replaced AngularJS (aka Angular v1).
Versions: 13.0, 12.0, 10.0 (2020-06-25)
NgRx: ngrx.io, Documentation
Visual Studio Code
Use Angular CLI
# create the application
ng new
# launch locally (open http://localhost:4200)
ng serve --open
# add material theme
ng add @angular/material
# create the first module
ng generate module layout
# create the home page component
ng generate component layout/home
Create a sonar-project.properties file at the root folder of the application:
sonar.host.url=https://sonarcloud.io
sonar.login=<token>
sonar.organization=<company>
sonar.projectKey=<projetKey>
sonar.projectName=<projectName>
sonar.projectVersion=1.0
sonar.sourceEncoding=UTF-8
sonar.sources=src
sonar.exclusions=**/node_modules/**,**/*.spec.ts,**/coverage/**,**/bin/**,**/obj/**
#sonar.tests=test
sonar.test.inclusions=**/*.spec.ts
sonar.typescript.lcov.reportPaths=src/WebApp/ClientApp/coverage/lcov.info
#sonar.dotnet.visualstudio.solution.file=Solution.Name.sln
Edit the package.json file:
{{< highlight json >}}
"scripts": {
  "sonar": "node_modules/sonar-scanner/bin/sonar-scanner.bat"
},
"dependencies": {
  "sonar-scanner": "^3.1.0",
  "tslint-sonarts": "^1.8.0"
}
{{< /highlight >}}
Follow the procedure given at update.angular.io
Option 1: Angular Datatable
Search history:
Home site: materializecss.com
Integration in an Angular project:
Clean and ok: How to use materialize-css with angular
Didn't work: How to use MaterializeCSS in Angular 2
Also didn't work: stanleyeosakul/angular-travelville
Didn't try: sherweb/ngx-materialize
~/.kube/config is the local configuration file (contains all the contexts, information about the clusters and user credentials)
# get current context
kubectl config current-context
# display context configuration
kubectl config get-contexts
# change context
kubectl config use-context <cluster-name>
# display version
kubectl version
# display cluster information
kubectl cluster-info
# display cluster configuration
kubectl config get-clusters
# get health information for the control plane components (the scheduler, the controller manager and etcd)
kubectl get componentstatuses
# list all the nodes in the cluster and report their status and Kubernetes version
kubectl get nodes
# show the CPU and memory capacity of each node, and how much of each is currently in use
kubectl top nodes
# view several resources at once
kubectl get deploy,rs,po,svc,ep
# create resources from a manifest file
kubectl create -f <filename>
# create or update resources from a manifest file
kubectl apply -f <filename>
# delete resources from a manifest file
kubectl delete -f <filename>
# list all namespaces
kubectl get namespaces
# create a new namespace
kubectl create ns hello-there
# list pods of a specific namespace
kubectl get pods --namespace kube-system
# list pods of all namespaces
kubectl get pods -A
# get more information about a pod
kubectl describe pod
# get log information of a specific pod
kubectl logs
# get pod yaml definition
kubectl get pod -o yaml
# watch pods
watch kubectl get pod --all-namespaces
# describe a pod
kubectl describe pod <pod-name> --namespace <namespace>
# get pod logs
kubectl logs [--tail=20] [--since=1h] <pod-name>
# display metrics about a pod and its containers
kubectl top pod <pod-name> --containers
# execute commands inside a pod (for investigation purpose)
kubectl exec -it <pod-name> -n <namespace> -- /bin/bash
# download or upload files from a container
kubectl cp my-file.txt <namespace>/<pod-name>:my-file.txt
kubectl cp <namespace>/<pod-name>:my-file.txt my-file.txt
# see all service accounts in all namespaces
kubectl get ServiceAccount -A
# see all secrets in all namespaces
kubectl get secrets -A
# create a CronJob
kubectl create cronjob my-cron --image=busybox --schedule="*/5 * * * *" -- echo hello
# update a CronJob
kubectl edit cronjob/my-cron
# update a CronJob with a specific IDE
KUBE_EDITOR="nano" kubectl edit cronjob/my-cron
# delete a CronJob
kubectl delete cronjob my-cron
# list deployments
kubectl get deployment
# see all services in all namespaces
kubectl get services -A
# list events sorted by creation time
kubectl get events --sort-by=.metadata.creationTimestamp
# see all ingresses in all namespaces
kubectl get ingress -A
# see a resource definition
kubectl get ingress mymicroservice -o yaml
# scale a resource (for example: kubectl scale deployment my-deploy --replicas=3)
kubectl scale
# forward a local port (8080) to a pod or service port (80)
kubectl port-forward xxx 8080:80
# runs a proxy to the Kubernetes API Server
kubectl proxy
# review agent pool specification
az aks show --resource-group myrgname --name myaksname --query agentPoolProfiles
# scale agent pool (2 nodes here)
az aks scale --resource-group myrgname --name myaksname --node-count 2 --query properties.provisioningState
gcloud container clusters create mycluster
gcloud container clusters list
kubectl get nodes
gcloud container clusters delete mycluster
# find and delete pods
kubectl delete pods $(kubectl get pods -o=name | grep mypodname | sed "s/^.\{4\}//")
| Issue | Advice |
|-------|--------|
| Pod with status CreateContainerConfigError | Look at the pod logs (kubectl logs podxxx), the issue should be detailed there |
Clean and simple cheat sheets to ease everyday work.
This repository gathers notes taken over the years, in Markdown format, in a Wiki/Knowledge base spirit. You're more than welcome to contribute (fork > branch > pull request)!
The online version is available at everyday-cheatsheets.docs.devpro.fr.
Single-board computers
wiki-tech.io: wiki in French
By default .NET Core Console applications reference very few elements.
These are good references to start with:
Microsoft.Extensions.DependencyInjection
Microsoft.Extensions.Logging
Microsoft.Extensions.Logging.Console
Microsoft.Extensions.Logging.Debug
Microsoft.Extensions.Configuration
Microsoft.Extensions.Configuration.Json
Json.NET
You can convert JSON to XML:
// object is needed as the value is an array
var expected = JsonConvert.DeserializeXmlNode($"{{\"object\": {step.ExpectedResponseJsonString}}}", "root");
Memory: poster from Pro .NET Memory
You can find all the release notes on GitHub.
Start with .NET Tutorial - Hello World in 10 minutes then Learn .NET Core and the .NET Core SDK tools by exploring these Tutorials.
.NET Core runs really well on Docker.
dotnet/dotnet-docker is the GitHub repository for .NET Core Docker official images, which are now hosted on Microsoft Container Registry (MCR). To know more, read the article .NET Core Container Images now Published to Microsoft Container Registry, published on March 15, 2019.
Julien Chable has an interesting blog to follow with articles on .NET Core and Docker.
Official images repository:
To review:
{{< highlight csharp >}}
// Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpClient(apiClientConfiguration.HttpClientName)
        .ConfigurePrimaryHttpMessageHandler(
            x => new HttpClientHandler
            {
                Credentials = new CredentialCache
                {
                    {
                        new Uri(apiClientConfiguration.EndpointDomain),
                        "NTLM",
                        new NetworkCredential(_configuration.CustomApiClientUsername, _configuration.CustomApiClientPassword)
                    }
                }
            });
}
{{< /highlight >}}
"Deploy and Run a Distributed Cloud Native system using Istio, Kubernetes & .NET core" source code
.NET is the free, open-source, cross-platform framework for building modern apps and powerful cloud services
CoreCLR (Common Language Runtime) is the runtime for .NET Core. It includes the garbage collector, JIT compiler, primitive data types and low-level classes.
dotnet tool install -g dotnet-format
Dotfuscator is a tool, available in Community Edition, that can be installed from Visual Studio 2017.
Readings:
From marketplace.visualstudio.com:
ILSpy (icsharpcode/ILSpy)
JustDecompile (telerik.com)
FxCop
StyleCop
Certificate management: Stackoverflow questions
NuGet is the package manager for .NET
An essential tool for any modern development platform is a mechanism through which developers can create, share, and consume useful code. Often such code is bundled into "packages" that contain compiled code (as DLLs) along with other content needed in the projects that consume these packages.
For .NET, the Microsoft-supported mechanism for sharing code is NuGet, which defines how packages for .NET are created, hosted, and consumed, and provides the tools for each of those roles.
Self-update: nuget update -self
Create spec file: nuget spec
Create packages: nuget pack
Solutions available (list not exhaustive!):
1/ MyGet
Pros: very easy to set up (less than 5 minutes), secure, free account (limited but more than enough for personal projects and evaluation), available on the internet, works well with VSTS, no maintenance or infra cost
2/ VSTS with Package Management
Pros: natively integrated with VSTS Build, no maintenance or infra cost
3/ Host & deploy a web application referencing NuGet.Server
Cons: seems like the only free solution BUT time is needed to set it up (creation of the solution, build & deploy) and maintain the server hosting the solution (+ infra cost); by default no backup or feed on the internet
4/ Sonatype Nexus
Cons: the community version does not manage NuGet feeds AND there is an infra/maintenance cost, plus feeds are not on the internet by default
Tips:
Do not forget to add a NuGet.config file at the root of the solutions that will use the library. Otherwise you won't be able to restore the packages on build systems such as VSTS (see the NuGet.config example further down).
Prerequisites:
NuGet server (needs to be defined):
In your VSTS project Settings section (wheel icon) go in "Services" page
In "Endpoints" click on "New Service Endpoint" and select "NuGet"
Fill the different elements (this is very easy if you are using MyGet, the feed URL and ApiKey have been displayed when you configured your feed)
Steps:
.NET Core > Restore: nothing particular here (don't forget the NuGet.config file if you are using other feeds than nuget.org)
.NET Core > Build: nothing particular here
.NET Core > Test: nothing particular here
.NET Core > dotnet pack: as of today (Feb 2018), you cannot use "NuGet pack" in VSTS, but you can do a "dotnet pack" instead.
NuGet > NuGet push:
Target feed location = External NuGet server (including other accounts/collections)
Nuget Server = the name of the server you defined earlier
Tips:
By default, the NuGet package will always have the version 1.0.0. There are 3 solutions:
1/ Update your build definition in VSTS
2/ Update your project file and add VersionPrefix and VersionSuffix
3/ Use MSBuild to control how you build your NuGet packages (see the sketch below)
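A sketch of options 2/ and 3/: once VersionPrefix/VersionSuffix are honored by the project, the version can be controlled at pack time (paths and values are examples):
# packs a prerelease package versioned 1.2.0-beta1
dotnet pack src/PalTracker --configuration Release /p:VersionPrefix=1.2.0 --version-suffix beta1
# or sets the full package version directly through MSBuild
dotnet pack src/PalTracker --configuration Release /p:PackageVersion=1.2.0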
ASP.NET Core is the open-source version of ASP.NET, that runs on Windows, Linux, macOS, and Docker.
Articles to review:
Go to Azure Portal and create an application in Azure Active Directory:
Example: run dotnet new mvc -o dotnetadauth --auth SingleOrg --client-id <clientId> --tenant-id <tenantId> --domain <domainName>
Edit the csproj file
Edit the appsettings.json file
Edit Startup.cs:
AutoMapper
Dapper
FluentAssertions
FluentValidation
ImageSharp
MediatR
Moq
Selenium WebDriver
xUnit
Azure Packages
Provided by Azure DevOps
MyGet
ProGet
NuGet Server
NuGet Gallery
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<packageSources>
<add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
<add key="MyGet Devpro" value="https://www.myget.org/F/devpro-public/api/v3/index.json" />
</packageSources>
</configuration>
public IActionResult Post([FromBody]string action)
{
if (...)
{
return StatusCode(423);
}
return Ok(new ... {});
}
<PackageReference Include="Microsoft.AspNetCore.Authentication.AzureAD.UI" Version="2.1.1" />
"AzureAd": {
"Instance": "https://login.microsoftonline.com/",
"Domain": "<domainName>",
"TenantId": "<tenantId>",
"ClientId": "<clientId>",
"CallbackPath": "/signin-oidc"
},
// in ConfigureServices()
services.Configure<CookiePolicyOptions>(options =>
{
// This lambda determines whether user consent for non-essential cookies is needed for a given request.
options.CheckConsentNeeded = context => true;
options.MinimumSameSitePolicy = SameSiteMode.None;
});
services.AddAuthentication(AzureADDefaults.AuthenticationScheme)
.AddAzureAD(options => Configuration.Bind("AzureAd", options));
services
.AddMvc(options =>
{
var policy = new AuthorizationPolicyBuilder()
.RequireAuthenticatedUser()
.Build();
options.Filters.Add(new AuthorizeFilter(policy));
})
.SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
// in Configure()
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
else
{
app.UseExceptionHandler("/Error");
app.UseHsts();
}
app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseSpaStaticFiles();
app.UseCookiePolicy();
app.UseAuthentication();
MongoDB is a general purpose, document-based, distributed database built for modern application developers and for the cloud era.
→ mongodb.com, Github, developer.mongodb.com
Resources: presentations, webinars, white papers
Flexible schema
Performance
High Availability
Primary / Secondaries architecture
BSON storage (Binary JSON)
GeoJSON Objects: support of the GeoJSON format for encoding a variety of geographic data structures
ACID transactions
A replica set is a group of mongod processes that maintain the same data set. Replica sets provide redundancy and high availability.
A node of the replica set can be: Primary, Secondary, Arbiter.
Read preference
Write concern
Read concern
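A sketch of these settings in the mongo shell (collection and document are hypothetical):
// write acknowledged by a majority of replica set members, with a 5s timeout
db.orders.insertOne(
  { item: "abc", qty: 1 },
  { writeConcern: { w: "majority", wtimeout: 5000 } }
)
// allow this query to read from a secondary when available
db.orders.find().readPref("secondaryPreferred")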
Manual: Query Plans, Limits, Analyze Query Performance
MongoDB indexes use a B-tree data structure.
Indexes give better read times but have an impact on write times (see the example commands below).
Index Types:
Single Field
Compound Index
Multikey Index
Geospatial Index
Text Indexes
Hashed Indexes
Index Properties:
Unique Indexes
Partial Indexes
Sparse Indexes
TTL Indexes (Time To Live)
See also Performance Best Practices: Indexing - February 12, 2020
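Example commands for some of the index types and properties above, in the mongo shell (collection names are hypothetical):
// single field index
db.products.createIndex({ name: 1 })
// compound index
db.products.createIndex({ category: 1, price: -1 })
// unique index property
db.users.createIndex({ email: 1 }, { unique: true })
// TTL index: documents expire one hour after createdAt
db.sessions.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })
// wildcard index (MongoDB 4.2+)
db.catalog.createIndex({ "$**": 1 })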
The storage engine in use can be seen with the command db.serverStatus(). It is a mongod option: --storageEngine.
In March 2015, there were two choices: MMAPv1 (original) and WiredTiger (new).
WiredTiger is new in MongoDB 3.0. It is the first pluggable storage engine.
Features:
Document level locking
Compression
Snappy (default) - fast
Zlib - more compression
None
Lacks some pitfalls of MMAPv1
Performance gains
Background:
Built separately from MongoDB
Used by other DBs
Open source
Internals:
Stores data in B-trees
Writes are initially separate, incorporated later
Two caches
WT caches - 1/2 of RAM (default)
FS cache
Checkpoint: every minute or more
No need for a journal
Quick Start: BSON Data Types - ObjectId
4.4 - June 09, 2020
Go to the download center, select "Server", then "MongoDB Community Server" edition, chose the target platform and version and let the download complete.
You'll download a file like mongodb-win32-x86_64-2008plus-ssl-4.0.4.zip
Unzip the content of the archive in a program folder (for example the D:\Programs folder)
Rename the folder to something explicit like mongodb-community-4.0.4
You can either update your PATH globally on your machine or do it when you need it (or through a bat file)
SET PATH=%PATH%;D:\Programs\mongodb-community-4.0.4\bin
The following command must return a valid output:
mongo --version
MongoDB shell version v4.0.4
git version: f288a3bdf201007f3693c58e140056adf8b04839
allocator: tcmalloc
modules: none
build environment:
    distmod: 2008plus-ssl
    distarch: x86_64
target_arch: x86_64
If you followed the steps to install the Mongo Shell, you'll easily be able to launch a MongoDB server locally (mongod).
# make sure the data path exists
md /path/to/data
# start a basic MongoDB instance (default port 27017)
mongod --dbpath=/path/to/data
You can then connect with the MongoDB Shell:
mongo
Check the images already downloaded locally
docker images
Get the image for a specific version of MongoDB
docker image pull mongo:4.0.4
Start the container
docker run -d -p 27017:27017 --name mongodb404 mongo:4.0.4
docker run --name mongodb -d -p 27017:27017 mongo:4.4.6
mongod --dbpath "C:\my\path" --port 27017
# start a mongo shell and be on mycollection
mongo --port 27017 mycollection
# restore from dump folder into mydbname database
mongorestore -d mydbname dump
# monitor basic usage statistics for each collection
mongotop
# monitor basic MongoDB server statistics
mongostat
→ docs.mongodb.com/program/mongo
Introduced in June 2020, available as a standalone package, it provides a fully functional JavaScript/Node.js environment for interacting with MongoDB deployments. It can be used to test queries and operations directly against a database.
→ Documentation, Download, GitHub, Introduction
mongosh <connection_string>
Download the zip file export from docs.mongodb.com/manual/tutorial/aggregation-zip-code-data-set.
Import the data into your MongoDB server
# to be run in the folder containing the json file
mongoimport --db demoZip --collection zips --file zips.json
# it should generate the following output
# 2018-11-19T14:48:53.296+0100 connected to: localhost
# 2018-11-19T14:48:53.705+0100 imported 29353 documents
You can also import the data to your Atlas cluster
mongoimport --uri "mongodb+srv://<user>:<password>@<cluster>.mongodb.net/demoZip" --collection zips --file zips.json
dbKoda holds a collection of sample data: github.com/SouthbankSoftware/dbkoda-data.
mtools is a collection of helper scripts to parse, filter, and visualize MongoDB log files (mongod, mongos). mtools also includes mlaunch, a utility to quickly set up complex MongoDB test environments on a local machine.
More information on github.com/rueckstiess/mtools, mongodb.com/blog/post/introducing-mtools.
You'll need Python (2 or 3) to install and use it.
# install with pip (Python)
pip install mtools
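A sketch of mlaunch usage (ports and topology are examples):
# spins up a local 3-node replica set starting on port 27100
mlaunch init --replicaset --nodes 3 --port 27100
# shows the environment started by mlaunch
mlaunch list
# stops it
mlaunch stop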