Clean and simple cheat sheets to ease everyday work.
This repository gathers notes taken over the years, in Markdown format, in a Wiki/Knowledge base spirit. You're more than welcome to contribute (fork > branch > pull request)!
The online version is available at everyday-cheatsheets.docs.devpro.fr.
Single-board computers
wiki-tech.io: wiki in French
The world’s most popular API gateway. Built for hybrid and multi-cloud, optimized for microservices and distributed architectures.
CNCF On-Demand Webinar: Kong Ingress Controller - Kubernetes Ingress On Steroids - September 23, 2021
→ keycloak.org, docs
Magalix Blog: What Is A Service Mesh? - Mar 10, 2020
[FR] Metanext > Service Mesh sur Kubernetes - May 30, 2020
Martin Fowler website page - January 23, 2004
Microservices are small, modular, and independently deployable services. Docker containers (for Linux and Windows) simplify deployment and testing by bundling a service and its dependencies into a single unit, which is then run in an isolated environment.
Articles:
Read:
Definition on wikipedia
Starting point with Tackle Business Complexity in a Microservice with DDD and CQRS Patterns
Code examples:
If you're French, you can look at this article from Octo.
Feature flags are a great way to do continuous delivery with the latest source code and activate new functionality when needed. But there is a cost, as described in an article from opensource.
Two standards are recommended:
gRPC
As of 2019, REST is still more widely used, but gRPC brings significant improvements and will be used more and more for new microservices.
You can easily find comparisons between REST and gRPC on the internet, for example on code.tutsplus.com. There is an interesting summary on docs.microsoft.com.
Refresh tokens: auth0.com
| Tool | Type | Language | Agent | Platforms | Concepts |
| --- | --- | --- | --- | --- | --- |
| Ansible | Configuration Management | YAML | Agentless | All | Modules, Playbooks |
| Chef | | | | | |
| Pulumi | | | | | |
A message broker (wikipedia) is a component of an IT infrastructure whose primary goal is to receive messages and make them available to other components.
It is a way to decouple applications inside an information system and provide high performance.
The goal of Apache Kafka is building real-time data pipelines and streaming apps
Azure Service Bus is a multi-tenant cloud messaging service handling asynchronous operations
RabbitMQ is an open source message broker, whose commercial version is managed by Pivotal Software
[FR] Comparatif RabbitMQ / Kafka by Ippon - March 27, 2018
Codebase
Dependencies
Config
Backing services
Build, release, run
Processes
Port binding
Concurrency
Disposability
Dev/prod parity
Logs
Admin processes
aim42 is the systematic approach to improve software systems and architectures
The C4 model for visualising software architecture: Context, Containers, Components and Code
Extreme programming
[Spike](https://en.wikipedia.org/wiki/Spike_(software_development))
A spike is a product-testing method (...) that uses the simplest possible program to explore potential solutions. It is used to determine how much work will be required to solve or work around a software issue. Typically, a 'spike test' involves gathering additional information or testing for easily reproduced edge cases. The term is used in agile software development approaches like Scrum or Extreme Programming.
TDD (Test Driven Development)
ITIL
ITIL 4: An A – Z Guide By Joe the IT Guy - Mar 21, 2019
Canary release
Martin Fowler website article - June 25, 2014
Lessons learned and best practices from Google and Waze - January 14, 2019
A/B testing
Computer network types
Local Area Network (LAN)
Storage Area Network (SAN)
Virtual Private Network (VPN)
Wide Area Network (WAN)
Wireless Local Area Network (WLAN)
Classless Inter-Domain Routing (CIDR)
Content Delivery Network (CDN)
Distributed Denial of Service (DDoS)
Domain Name System (DNS)
Gateway
Firewall
Quality of Service (QoS)
Load balancers
Network Address Translation (NAT)
Network topologies
Bus
Ring
Star
Mesh
Tree
Open Systems Interconnection (OSI) model
(1) Physical Layer
(2) Data Link Layer
(3) Network Layer
(4) Transport Layer
(5) Session Layer
(6) Presentation Layer
(7) Application Layer
TCP/IP
Internet Protocol (IP) addresses
Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) ports
...
Online payment processing for internet businesses.
Stripe is a suite of payment APIs that powers commerce for online businesses of all sizes, including fraud prevention, and subscription management. Use Stripe's payment platform to accept and process payments online for easy-to-use commerce solutions.
An all-in-one test automation solution.
Puppeteer is a Node library which provides a high-level API to control Chrome or Chromium over the DevTools Protocol. Puppeteer runs headless by default, but can be configured to run full (non-headless) Chrome or Chromium.
An open source load testing tool. Define user behaviour with Python code, and swarm your system with millions of simultaneous users.
Declarative continuous delivery with a fully-loaded UI
What's New in Argo CD 2.6 | StruggleOps Stream Highlights - February 7, 2023
Flux is a set of continuous and progressive delivery solutions for Kubernetes that are open and extensible.
November 2021 update - October 29, 2021
Cloud-native application life-cycle orchestration
Dynatrace - What is keptn, how it works and how to get started! - June 17, 2019
Test and Deploy with Confidence. Easily sync your GitHub projects with Travis CI and you’ll be testing your code in minutes!
Travis CI build config processing: GitHub
Azure DevOps, previously known as VSTS (Visual Studio Team Services), is the application lifecycle platform provided by Microsoft.
Wiki
Azure Boards
Azure Repositories (git)
Azure Pipelines: Build & Release
Azure Tests
Artifacts
Use `[[_TOC_]]` to have an automatically generated table of contents (more information on Syntax guidance for Markdown usage)
→ azure.microsoft.com/services/devops/pipelines
Caching and faster artifacts in Azure Pipelines - July 24, 2019
New IP firewall rules for Azure DevOps Services - May 31, 2019
Microsoft-hosted agents (public IP ranges)
Example of pipelines in MicrosoftDocs GitHub repositories.
.NET Blog article on How the .NET Team uses Azure Pipelines to produce Docker Images
Uploading to Codecov just got easier - November 13, 2019
By default, it won't work for Artifacts: you need to click on "..." in the permission pane of your feed and click on "Allow project-scoped builds".
Secure and share packages using feed permissions
| Shortcut | Action |
| --- | --- |
| `t` | Open file finder |
| `Ctrl` + `k` | Navigate, search, and run commands directly from your keyboard |
| `.` | Open Visual Studio Code Web (`https://github.com` will be replaced by `https://github.dev`) |
Automate, customize, and execute your software development workflows right in your repository with GitHub Actions. You can discover, create, and share actions to perform any job you'd like, including CI/CD, and combine actions in a completely customized workflow.
→ docs
GitHub Actions now supports CI/CD, free for public repositories - August 8, 2019
The solution to share your best practices between developers
Tuleap is an ALM (Application Lifecycle Management) tool; it is an open source solution provided by Enalean.
Kubernetes-native workflow engine supporting DAG and step-based workflows
OpenShift Blog > Creating an Argo Workflow With Vault Integration Using Helm - February 17, 2021
Source: Kubernetes Documentation Concepts Overview
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
→ Docker
See also Microsoft
Containerization is the packaging together of software code with all its necessary components like libraries, frameworks, and other dependencies so that they are isolated in their own "container".
→ Red Hat
See also aqua
Rancher Desktop
Containers vs. Pods - Taking a Deeper Look - October 28, 2021
An industry-standard container runtime with an emphasis on simplicity, robustness and portability
jamessturtevant.com - April 01, 2021
kruyt.org - March 16, 2021
A spec for packaging distributed apps. CNABs facilitate the bundling, installing and managing of container-native apps — and their coupled services.
→ cnab.io
Docker Donates the cnab-to-oci Library to cnab.io - Feb 12, 2020
Dapr helps developers build event-driven, resilient distributed applications. Whether on-premises, in the cloud, or on an edge device, Dapr helps you tackle the challenges that come with building microservices and keeps your code platform agnostic.
→ dapr.io
Dapr joins CNCF Incubator - November 3, 2021
Envoy is an open source edge and service proxy, designed for cloud-native applications
Kong: Service Mesh 101: The Role of Envoy - August 26, 2021
Fluentd is an open source data collector for unified logging layer. (It) allows you to unify data collection and consumption for a better use and understanding of data.
CNCF Tools Overview: Fluentd – Unified Logging Layer - Feb 26, 2020
Enterprise-grade Serverless on your own terms. Kubernetes-based platform to deploy and manage modern serverless workloads.
→ knative.dev, github.com/knative
What is Knative? - January 8, 2019
Distributed tracing with Knative, OpenTelemetry and Jaeger - August 20, 2021
Cluster API is a Kubernetes sub-project focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters.
Introducing the Cluster API Provider for Azure for Kubernetes cluster management - December 15, 2020
A distributed, reliable key-value store for the most critical data of a distributed system
Reference: etcd.io, GitHub, docs, API
Several options (the Docker option is sketched after this list):
Grab a pre-built binary from the releases page
Build from the source (written in Go)
Docker (review latest version from releases page)
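A minimal sketch of the Docker option (the image name, tag, data directory, and listen addresses are assumptions; check the releases page for the current version):

```bash
# Run a single-node etcd for local testing (image and tag are examples)
docker run -d --name etcd-test -p 2379:2379 \
  quay.io/coreos/etcd:v3.5.0 \
  /usr/local/bin/etcd --name node1 --data-dir /etcd-data \
    --listen-client-urls http://0.0.0.0:2379 \
    --advertise-client-urls http://0.0.0.0:2379

# Check the cluster from the host (assumes etcdctl is installed locally)
etcdctl --endpoints=http://localhost:2379 member list
```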
| Command | Action |
| --- | --- |
| `etcdctl member list` | Lists all members in the cluster |
Helm 3: The package manager for Kubernetes. It is the best way to find, share, and use software built for Kubernetes.
Start with the installation guide.
On Windows, get the zip file from the Release page and extract the exe file to a folder defined in the PATH environment variable.
Make sure helm is available from the command line: `helm version`.
Then follow the quickstart guide.
Add at least one repository (check with `helm repo ls`), for instance `helm repo add stable https://charts.helm.sh/stable`.
Run `helm repo update` to update the repositories.
You can look at what is available with `helm search repo stable`.
Install the first chart with `helm install stable/mysql --generate-name`.
The output of this command is very interesting:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster: mysql-xxxxxxx.default.svc.cluster.local
To get your root password run: MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql-xxxxxxx -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)
To connect to your database:
Run an Ubuntu pod that you can use as a client: kubectl run -i --tty ubuntu --image=ubuntu:18.04 --restart=Never -- bash -il
Install the mysql client: $ apt-get update && apt-get install mysql-client -y
Connect using the mysql cli, then provide your password: $ mysql -h mysql-xxxxxxx -p
To connect to your database directly from outside the K8s cluster: `MYSQL_HOST=127.0.0.1`, `MYSQL_PORT=3306`. Execute the following command to route the connection: `kubectl port-forward svc/mysql-xxxxxxx 3306` and `mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}`.
As usual, look at the progress with `kubectl get pods` ("STATUS" column).
At the end, clean your cluster: `helm uninstall mysql-xxxxxxx`.
    mychart
    ├── Chart.yaml
    ├── templates
    │   ├── deployment.yaml
    │   └── service.yaml
    └── values.yaml
| Command | Action |
| --- | --- |
| `helm show chart stable/xxxx` | Get a simple idea of the features of chart stable/xxxx (stable/mysql for example) |
| `helm list` | See what has been released with Helm |
| `helm help xxx` | Get help message on the xxx command (install for example) |
| `helm ls` | What has been released using Helm |
| `helm uninstall <name>` | Uninstall a release |
k3d is a lightweight wrapper to run k3s (Rancher Lab’s minimal Kubernetes distribution) in docker.
Download & install latest release (ref. k3d.io)
| Command | Action |
| --- | --- |
| `k3d cluster create <mycluster>` | Create a cluster |
| `k3d cluster list` | List the clusters |
| `k3d cluster stop <mycluster>` | Stops a cluster |
| `k3d cluster start <mycluster>` | Starts a cluster |
| `k3d cluster delete <mycluster>` | Delete a cluster |
Create a cluster
Deploy a basic workflow (ref. k3d Guides > Exposing Services; see the sketch after this list)
Update `hosts` file
Make sure ingress is working
Clean-up
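A minimal sketch of these steps, adapted from the k3d "Exposing Services" guide (the cluster name, port mapping, and nginx image are examples; the Ingress apiVersion may differ with your Kubernetes version):

```bash
# Create a cluster, mapping host port 8081 to port 80 of the built-in loadbalancer
k3d cluster create mycluster -p "8081:80@loadbalancer"

# Deploy a basic workload and expose it with a ClusterIP service
kubectl create deployment nginx --image=nginx
kubectl create service clusterip nginx --tcp=80:80

# Create an ingress routing / to the nginx service
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF

# Make sure ingress is working
curl localhost:8081/

# Clean up
k3d cluster delete mycluster
```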
CoreDNS configuration
kind is a tool for running local Kubernetes clusters using Docker container “nodes”. kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
Follow Using WSL2
Create the cluster config file (see the sketch after this list)
Workaround on Ubuntu 20.04 to fix the error while creating the cluster (see issue #2323)
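A hedged sketch of a cluster config file and its usage (the file name, cluster name, and node layout are assumptions):

```bash
# Write a minimal kind cluster configuration (one control-plane node, one worker)
cat > kind-config.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
EOF

# Create the cluster from the config file
kind create cluster --name dev --config kind-config.yaml
```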
LoadBalancer Services using Kubernetes in Docker by Owain Williams - September 20, 2022
`~/.kube/config` is the local configuration file (it contains all the contexts, information about the clusters and user credentials)
| Issue | Advice |
| --- | --- |
| Pod with status `CreateContainerConfigError` | Look at the pod logs (`kubectl logs podxxx`), the issue should be detailed there |
MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols.
Ingresses and Load Balancers in Kubernetes with MetalLB and nginx-ingress by Adatlas - September 8, 2022
Local Kubernetes, focused on application development & education
minikube.sigs.k8s.io, kubernetes/minikube
Follow the instructions given in the Getting Started page.
More information on Installing Kubernetes with Minikube page.
Make sure Docker Desktop has allocated at least 3 GB of RAM.
Important: If you're on Windows, open a command window as admin.
Run:
- (Optional) `minikube config set vm-driver hyperv` to set the default driver (here the Hyper-V driver)
- `minikube start` to start the Kubernetes node
- `minikube status` to get the overall status
- `minikube pause` to pause it
- `minikube stop` to stop it

Run `minikube dashboard` to open the web dashboard.
Run `kubectl config use-context minikube` to be able to use kubectl on your local Kubernetes instance.
| Command | Action |
| --- | --- |
| `minikube service hello-minikube` | Launch a web browser on a service |
| `minikube service xxx --url` | Display the URL for a given service (xxx) |
| `minikube config set memory 16384` | Update the default memory limit (2048 by default) |
| `minikube addons list` | Browse the catalog of easily installed Kubernetes services |
| `minikube tunnel` | Start a tunnel to create a routable IP for a "balanced" deployment |
| `minikube start -p aged --kubernetes-version=v1.16.1` | Create another cluster running an older Kubernetes release |
| `minikube ip` | Display the Kubernetes IP |
Run `minikube delete` and, if needed, delete the `.kube` and `.minikube` folders in your home directory.
Incorrect date (can lead to errors with Docker pull)
An open model for defining cloud native apps.
→ oam.dev
oam-dev/rudr is "A Kubernetes implementation of the Open Application Model specification".
Create with freedom. Release with confidence.
Feature management lets you turn new features on/off in production with no need for redeployment. A software development best practice for releasing and validating new features.
MongoDB is a general purpose, document-based, distributed database built for modern application developers and for the cloud era.
→ mongodb.com, Github, developer.mongodb.com
Resources: presentations, webinars, white papers
Flexible schema
Performance
High Availability
Primary / Secondaries architecture
BSON storage (Binary JSON)
GeoJSON Objects support of GeoJson format for encoding a variety of geographic data structures
ACID transactions
A replica set is a group of `mongod` processes that maintain the same data set. Replica sets provide redundancy and high availability.
A node of the replica set can be: Primary, Secondary, Arbiter.
Read preference
Write concern
Read concern
Manual: Query Plans, Limits, Analyze Query Performance
MongoDB indexes use a B-tree data structure.
Indexes give better read time but have an impact on write time.
Index Types:
Single Field
Compound Index
Multikey Index
Geospatial Index
Text Indexes
Hashed Indexes
Index Properties:
Unique Indexes
Partial Indexes
Sparse Indexes
TTL Indexes (Time To Live)
See also Performance Best Practices: Indexing - February 12, 2020
The storage engine that is used can be seen with the command `db.serverStatus()`. It is a `mongod` option: `--storageEngine`.
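A small sketch of both ways to check/set the storage engine (the data path and engine name are examples):

```bash
# Start mongod with an explicit storage engine
mongod --dbpath /data/db --storageEngine wiredTiger

# From another terminal, see which engine is actually in use
mongo --quiet --eval "db.serverStatus().storageEngine"
```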
In March 2015, there were two choices: MMAPv1 (original) and WiredTiger (new).
WiredTiger is new in MongoDB 3.0. It is the first pluggable storage engine.
Features:
Document level locking
Compression
Snappy (default) - fast
Zlib - more compression
None
Avoids some pitfalls of MMAPv1
Performance gains
Background:
Built separately from MongoDB
Used by other databases
Open source
Internals:
Stores data in btrees
Writes are initially separate, incorporated later
Two caches
WT caches - 1/2 of RAM (default)
FS cache
Checkpoint: every minute or more
No need for a journal
Quick Start: BSON Data Types - ObjectId
Go to the download center, select "Server", then "MongoDB Community Server" edition, chose the target platform and version and let the download complete.
You'll download a file like mongodb-win32-x86_64-2008plus-ssl-4.0.4.zip
Unzip the content of the archive in a program folder (for example D:\Programs
folder)
Rename the folder with something explicit like mongodb-community-4.0.4
You can either update your PATH globally on your machine or do it when you need it (or through a bat file)
The following command must return a valid output
    MongoDB shell version v4.0.4
    git version: f288a3bdf201007f3693c58e140056adf8b04839
    allocator: tcmalloc
    modules: none
    build environment:
        distmod: 2008plus-ssl
        distarch: x86_64
    target_arch: x86_64
If you followed the steps to have the Mongo Shell, you'll be able to easily launch a MongoDB server locally (`mongod`).
You can then connect with the MongoDB Shell:
Check the images already downloaded locally (the commands for these three steps are sketched after this list)
Get the image for a specific version of MongoDB
Start the container
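A minimal sketch of these three steps (the `mongo:4.0` tag and the container name are assumptions):

```bash
# Check the images already downloaded locally
docker images

# Get the image for a specific version of MongoDB
docker pull mongo:4.0

# Start the container, exposing the default MongoDB port
docker run -d --name mongodb -p 27017:27017 mongo:4.0
```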
→ docs.mongodb.com/program/mongo
Introduced in June 2020, available as a standalone package, it provides a fully functional JavaScript/Node.js environment for interacting with MongoDB deployments. It can be used to test queries and operations directly against a database.
→ Documentation, Download, GitHub, Introduction
Download the zip file export from docs.mongodb.com/manual/tutorial/aggregation-zip-code-data-set.
Import the data into your MongoDB server (see the sketch after this list)
You can also import the data to your Atlas cluster
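A hedged sketch of both imports (the database/collection names and the Atlas connection string are placeholders; the `zips.json` file name comes from the tutorial's export):

```bash
# Import the extracted file into a local MongoDB server
mongoimport --db test --collection zips --file zips.json

# Import into an Atlas cluster (replace the connection string with your own)
mongoimport --uri "mongodb+srv://<user>:<password>@<cluster>.mongodb.net/test" --collection zips --file zips.json
```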
dbKoda holds a collection of sample data: github.com/SouthbankSoftware/dbkoda-data.
mtools is a collection of helper scripts to parse, filter, and visualize MongoDB log files (mongod, mongos). mtools also includes mlaunch, a utility to quickly set up complex MongoDB test environments on a local machine.
More information on github.com/rueckstiess/mtools, mongodb.com/blog/post/introducing-mtools.
You'll need Python (2 or 3) to install and use it.
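A minimal sketch of installing mtools and using mlaunch (the replica set layout and log path are examples):

```bash
# Install mtools with pip
pip install mtools

# Spin up a local 3-node replica set with mlaunch
mlaunch init --replicaset --nodes 3

# Summarize a mongod log file with mloginfo
mloginfo /path/to/mongod.log
```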
MongoDB Atlas is an integrated suite of data services centered around a cloud database designed to accelerate and simplify how you build with data.
→ MongoDB Atlas Database, MongoDB Cloud Services, docs, Resources
Atlas Compute Auto-Scaling and Data Infrastructure as Code at MongoDB.local London - September 26, 2019
Cloud providers: Amazon Web Services, Google Cloud Platform, Microsoft Azure
Optimizing Your MongoDB Deployment with Performance Advisor - November 22, 2022
The easiest way to explore and manipulate your MongoDB data
The GUI for MongoDB. Visually explore your data. Run ad hoc queries in seconds. Interact with your data with full CRUD functionality. View and optimize your query performance. Available on Linux, Mac, or Windows. Compass empowers you to make smarter decisions about indexing, document validation, and more.
→ mongodb.com/products/compass, GitHub
Navigate to mongodb.com/download-center/compass, review and set the version and platform, then click "Download" to start the download.
For Windows, you'll have a file with a name like "mongodb-compass-1.16.3-win32-x64.exe", you just have to execute the exe file.
→ evergreen-ci/evergreen, evergreen.mongodb.com
Testing Linearizability with Jepsen and Evergreen: “Call Me Continuously!” - February 16, 2017
Evergreen Continuous Integration: Why We Reinvented The Wheel - July 27, 2016
How We Test MongoDB: Evergreen - June 1, 2015
Distributed Transactions extending MongoDB’s multi-document ACID guarantees from replica sets to sharded clusters, enabling you to serve an ever broader range of use cases.
On-Demand Materialized Views using the new $merge operator. Caching the output of a large aggregation in a collection is a common pattern, and the new $merge operator lets you update those results efficiently instead of completely recalculating them.
Wildcard Indexes make it easy and natural to model highly heterogeneous collections like product catalogs, without sacrificing great index support. You simply define a filter that automatically indexes all matching fields, sub-documents, and arrays in a collection.
MongoDB Query Language enhancements such as more expressive updates, new math operators, and expanded regex support. update and findAndModify commands can now reference existing fields, and incorporate aggregation pipelines for even more expressivity.
Retryable Reads and Writes, reducing the complexity of writing code that handles transient cluster failures.
What's new in MongoDB 4.2 (slides) - August 13, 2019
MongoDB 4.2 is now GA: Ready for your Production Apps - August 13, 2019
A Versioned API, designed to preserve application behavior through upgrades
Upgrade Fearlessly with the MongoDB Versioned API - June 1st, 2021
Native time series collections and clustered indexing
Paginations 1.0: Time Series Collections in five minutes - October 21, 2021
Window functions and new temporal operators
The Six Principles For Resilient Evolvability - November, 2020
Built by MongoDB engineers, Ops Manager is the management platform that makes it easy to deploy, monitor, back up, and scale MongoDB on your own infrastructure.
→ mongodb.com/products/ops-manager
Ops Manager is an incredible tool provided by MongoDB. You need to get a licence to use it in production, but the benefits are clearly worth it.
You can see it as a dashboard opened to anyone inside your organization, where you can completely manage and automate your MongoDB instances, replica and sharding sets as well as giving many live insights about the usage and the data.
Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker
→ redis.io
With Docker (see hub.docker.com)
Backup can be done with pgAdmin (personally I prefer the plain format to have human readable SQL content)
Restore can be done with Docker: `cat D:\Temp\dump.sql | docker exec -i postgres966 psql -U postgres` (you may have to add the missing role entries, which are not exported, such as `CREATE ROLE mycompany SUPERUSER;`)
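For reference, a hypothetical way the `postgres966` container used above could have been started (image tag and password are assumptions):

```bash
# Start a PostgreSQL 9.6.6 container named postgres966
docker run -d --name postgres966 -p 5432:5432 -e POSTGRES_PASSWORD=postgres postgres:9.6.6
```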
One framework. Mobile & desktop.
→ angular.io, API
Angular has replaced AngularJS (aka Angular v1).
NgRx: ngrx.io, Documentation
Visual Studio Code
Use Angular CLI
Create a `sonar-project.properties` file at the root folder of the application
Edit the `package.json` file:
{{< highlight json >}}
"scripts": {
  "sonar": "node_modules/sonar-scanner/bin/sonar-scanner.bat"
},
"dependencies": {
  "sonar-scanner": "^3.1.0",
  "tslint-sonarts": "^1.8.0"
}
{{< /highlight >}}
Follow the procedure given at update.angular.io
Option 1: Angular Datatable
Search history:
Home site: materializecss.com
Integration in an Angular project:
Clean and ok: How to use materialize-css with angular
Didn't work: How to use MaterializeCSS in Angular 2
Also didn't work: stanleyeosakul/angular-travelville
Didn't try: sherweb/ngx-materialize
| Command | Action |
| --- | --- |
| `ng new` | Create a new Angular application with interactive questions |
AngularConnect - London, UK - 19 & 20 September - Twitter
Excellent summary: All Talks
Videos:
Talks:
"How to make Angular Fast" Video & Slides & Sources by Misko Hevery
"How Angular Works" Video Slides by Kara Erickson
"Quantum facades" Slides by Sam Julien
"Building Angular apps with internationalization (i18n) in mind | Naomi Meyer" Video
2nd Day Keynote by Minko Gechev
Our collaboration with standard committees, Chrome, and Bazel
Automating DX for faster Web
Intelligent tooling
Enabling best practices
ng deploy
Workshops:
.NET is the free, open-source, cross-platform framework for building modern apps and powerful cloud services
CoreCLR (Common Language Runtime) is the runtime for .NET Core. It includes the garbage collector, JIT compiler, primitive data types and low-level classes.
`dotfuscator` is a tool, available in Community Edition, that can be installed from Visual Studio 2017.
Readings:
From marketplace.visualstudio.com:
ILSpy (icsharpcode/ILSpy)
JustDecompile (telerik.com)
FxCop
StyleCop
Certificate management: Stackoverflow questions
ASP.NET Core is the open-source version of ASP.NET, that runs on Windows, Linux, macOS, and Docker.
Articles to review:
Go to Azure Portal and create an application in Azure Active Directory: Integrating Azure AD into an ASP.NET Core web app
Examples: GitHub Azure-Samples/active-directory-dotnet-webapp-openidconnect-aspnetcore or run dotnet new mvc -o dotnetadauth --auth SingleOrg --client-id <clientId> --tenant-id <tenantId> --domain <domainName>
Edit the csproj file
Edit the `appsettings.json` file
Edit `Startup.cs`:
Use the power of .NET and C# to build full stack web apps without writing a line of JavaScript.
→ dotnet.microsoft.com/apps/aspnet/web-apps/blazor
Blazor Server in .NET Core 3.0 scenarios and performance - Oct 10, 2019
What’s next for System.Text.Json? - December 16th, 2020
Improvements in native code interop in .NET 5.0 - September 1st, 2020
App Trimming in .NET 5 - August 31st, 2020
Introducing the Half type! - August 31st, 2020
Introducing .NET 5 - May 6, 2020
.NET 5 Preview 1 - Mar 16, 2020
ASP.NET Core updates - Mar 16, 2020
Introducing C# Source Generators - April 29, 2020
Announcing .NET 6 - The Fastest .NET Yet - November 8, 2021
The Unified .NET 6 - October 29, 2021
The .NET command-line interface (CLI) is a cross-platform toolchain for developing, building, running, and publishing .NET applications. The .NET CLI is included with the .NET SDK.
- `dotnet --version` - Display information on the installed version (`dotnet --info` gives more details)
- `dotnet new` - View the available templates (see docs.microsoft.com). Examples:
  - `dotnet new webapi --output src/PalTracker --name PalTracker` will use the template "ASP.NET Core Web API"
  - `dotnet new xunit --output test/PalTrackerTests --name PalTrackerTests` will use the template "xUnit Test Project"
  - `dotnet new sln --name PalTracker` will use the template "Solution File"
- `dotnet add reference` - Adds project-to-project (P2P) references (see docs.microsoft.com)
- `dotnet add package` - Adds a package reference to a project file (see docs.microsoft.com). Examples:
  - `dotnet add test/PalTrackerTests reference src/PalTracker/PalTracker.csproj`
  - `dotnet add test/PalTrackerTests package Microsoft.AspNetCore.TestHost --version 2.2.0`
- `dotnet sln` - Modifies a .NET Core solution file (see docs.microsoft.com). Examples:
  - `dotnet sln PalTracker.sln add src/PalTracker/PalTracker.csproj`
- `dotnet run` - Runs source code without any explicit compile or launch commands (see docs.microsoft.com). Examples:
  - `dotnet run --project src/PalTracker`
- `dotnet publish` - Packs the application and its dependencies into a folder for deployment to a hosting system (see docs.microsoft.com). Examples:
  - `dotnet publish src/PalTracker --configuration Release`
- `dotnet test` - Run the tests (see docs.microsoft.com). Examples:
  - `dotnet test test/PalTrackerTests --filter PalTrackerTests.InMemoryTimeEntryRepositoryTest`
See also:
By default .NET Core Console applications reference very few elements.
These are good references to start with (the corresponding `dotnet add package` commands are sketched after this list):
Microsoft.Extensions.DependencyInjection
Microsoft.Extensions.Logging
Microsoft.Extensions.Logging.Console
Microsoft.Extensions.Logging.Debug
Microsoft.Extensions.Configuration
Microsoft.Extensions.Configuration.Json
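A minimal sketch of adding the packages listed above to a console project (run from the project folder; the latest versions are resolved by default):

```bash
dotnet add package Microsoft.Extensions.DependencyInjection
dotnet add package Microsoft.Extensions.Logging
dotnet add package Microsoft.Extensions.Logging.Console
dotnet add package Microsoft.Extensions.Logging.Debug
dotnet add package Microsoft.Extensions.Configuration
dotnet add package Microsoft.Extensions.Configuration.Json
```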
Json.NET
You can convert JSON to XML:
Memory: poster from Pro .NET Memory
You can find all the release notes on GitHub.
Start with .NET Tutorial - Hello World in 10 minutes then Learn .NET Core and the .NET Core SDK tools by exploring these Tutorials.
.NET Core runs really well on Docker.
dotnet/dotnet-docker is the GitHub repository for the .NET Core Docker official images, which are now hosted on Microsoft Container Registry (MCR). To know more, read the article .NET Core Container Images now Published to Microsoft Container Registry, published on March 15, 2019.
Julien Chable has an interesting blog to follow with articles on .NET Core and Docker.
Official images repository:
To review:
{{< highlight csharp >}}
// Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpClient(apiClientConfiguration.HttpClientName)
        .ConfigurePrimaryHttpMessageHandler(
            x => new HttpClientHandler
            {
                Credentials = new CredentialCache
                {
                    {
                        new Uri(apiClientConfiguration.EndpointDomain),
                        "NTLM",
                        new NetworkCredential(_configuration.CustomApiClientUsername, _configuration.CustomApiClientPassword)
                    }
                }
            });
}
{{< /highlight >}}
"Deploy and Run a Distributed Cloud Native system using Istio, Kubernetes & .NET core" source code
Experiments on GitHub: devpro/dotnetcore-logging
For a netstandard library
Add `Microsoft.Extensions.Logging` to the project (do not add a strong dependency to a logging framework such as log4net or NLog!).
Add an `ILogger` dependency (IoC) to the class constructor and the related private field
Homepage: serilog.net
Possible with Serilog
NLog.Redis is not yet available for .NET Core (as of May 10, 2018).
NuGet is the package manager for .NET
→ nuget.org, learn.microsoft.com
An essential tool for any modern development platform is a mechanism through which developers can create, share, and consume useful code. Often such code is bundled into "packages" that contain compiled code (as DLLs) along with other content needed in the projects that consume these packages.
For .NET, the Microsoft-supported mechanism for sharing code is NuGet, which defines how packages for .NET are created, hosted, and consumed, and provides the tools for each of those roles.
→ docs.microsoft.com/what-is-nuget
Selenium WebDriver
Self-update: nuget update -self
Create spec file: nuget spec
Create packages: nuget pack
Azure Packages
Provided by Azure DevOps
ProGet
NuGet Server
NuGet Gallery
Solutions available (list not exhaustive!):
1/ MyGet
Pros: very easy to setup (less than 5 minutes), secure, free account (limited but more than enough for personal projects and evaluate), available on internet, works well with VSTS, no maintenance of infra cost
2/ VSTS with Package Management VSTS extension
Pros: natively integrated with VSTS Build, no maintenance of infra cost
3/ Host & deploy a web application referencing NuGet.Server
Cons: seems like the only free solution, BUT it takes time to set up (creation of the solution, build & deploy) and to maintain the server hosting the solution (+ infra cost); by default there is no backup or internet-facing feed
4/ Sonatype Nexus
Cons: the community version does not manage NuGet feeds AND there is an infra/maintenance cost, plus feeds are not on the internet by default
Tips:
Do not forget to add a `NuGet.config` file at the root of the solutions that will use the library (see Configuring NuGet behavior and NuGet.Config reference). Otherwise you won't be able to run `dotnet restore` on build systems such as VSTS. Example:
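A hypothetical `NuGet.config` at the solution root (the MyGet feed name and URL are placeholders):

```bash
cat > NuGet.config <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="mycompany" value="https://www.myget.org/F/mycompany/api/v3/index.json" />
  </packageSources>
</configuration>
EOF
```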
Prerequisites:
NuGet server (needs to be defined):
In your VSTS project Settings section (wheel icon) go in "Services" page
In "Endpoints" click on "New Service Endpoint" and select "NuGet"
Fill the different elements (this is very easy if you are using MyGet, the feed URL and ApiKey have been displayed when you configured your feed)
Steps:
.NET Core > Restore: nothing particular here (don't forget the NuGet.config file if you are using other feeds than nuget.org)
.NET Core > Build: nothing particular here
.NET Core > Test: nothing particular here
.NET Core > dotnet pack: as of today (Feb 2018), you cannot use "NuGet pack in VSTS" but you can do a "dotnet pack" instead (ref discussion on github).
NuGet > NuGet push:
Target feed location = External NuGet server (including other accounts/collections)
Nuget Server = the name of the server you defined earlier
Tips:
By default, the NuGet package will always have the version 1.0.0 (question raised on Stackoverflow). There are 3 solutions:
1/ Update your build definition in VSTS
2/ Update your project file and add `VersionPrefix` and `VersionSuffix` (see the sketch after this list)
3/ Use MSBuild to control how you build your NuGet packages
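A small sketch of options 2/3, setting the version properties at pack time (the version numbers are examples):

```bash
# Set VersionPrefix/VersionSuffix when packing
dotnet pack --configuration Release /p:VersionPrefix=1.2.3 /p:VersionSuffix=beta1

# Or pass a full version directly
dotnet pack --configuration Release /p:Version=1.2.3
```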
Understanding gRPC Concepts, Use Cases & Best Practices - January 02, 2023
gRPC Web with .NET - December 10, 2020
One codebase. Any platform. Now in Vue, Angular, React.
An open source mobile UI toolkit for building high quality, cross-platform native and web app experiences. Move faster with a single code base, running everywhere with JavaScript and the Web.
→ ionicframework.com, GitHub Twitter
npm i ionic-angular
ionic-angular
→ Jekyll
Install Ruby
On Windows, download the latest version with DevKit from the Download page and execute it, agree to run `ridk install` at the end
Install jekyll gem
Run gem install bundler jekyll
Open the terminal at the root folder and run bundle exec jekyll serve
This code base was made by using the command: jekyll new my-website
See alcher.dev
Node.js is a JavaScript runtime built on Chrome's V8 JavaScript engine.
Fast, unopinionated, minimalist web framework for Node.js
Node Package Manager
Install and use npm-check-updates
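A minimal sketch of a typical npm-check-updates session (run from the folder containing `package.json`):

```bash
npm install -g npm-check-updates   # install the tool globally
ncu                                # list the dependencies that can be upgraded
ncu -u                             # rewrite package.json with the newer versions
npm install                        # install the updated dependencies
```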
Read the documentation about MyGet NPM support
Add another registry on MyGet: npm config set @mycompany:registry https://www.myget.org/F/mycompany/npm/
npm install @mycompany/mypackage@1.0.1
Or add the zip URL directly: