Author: ie9kheio7fjf

  • Optimizing_Merchant_Vessels

    Merchant Vessel Safety and Performance Enhancement

    Introduction

    The Merchant Vessel Safety and Performance Enhancement with Machine Learning project is dedicated to improving the safety, efficiency, and sustainability of merchant vessels by applying advanced machine learning techniques. This comprehensive README provides detailed insights into the project’s features, installation steps, and usage guidelines.

    Features

    • Predictive Maintenance: Early detection of equipment failures to minimize downtime and prevent accidents.
    • Fuel Consumption Optimization: Real-time adjustments for optimal fuel usage based on vessel conditions.
    • Route Planning and Optimization: Intelligent route recommendations considering weather, traffic, and efficiency.
    • Crew Performance Monitoring: Continuous assessment of crew well-being and performance for safer operations.
    • Regulatory Compliance: Ensuring adherence to maritime regulations and safety standards.
    • Environmental Impact Reduction: Minimizing the vessel’s ecological footprint by reducing emissions.
    • Continuous Learning and Feedback Loop: Iterative improvements based on real-world outcomes.
    • Real-time Decision Support: Providing actionable recommendations for both crew and operators.
    • Scalability: Adaptable to different vessel types and configurations.

    Project Overview

    The Merchant Vessel Safety and Performance Enhancement project harnesses advanced machine learning techniques to enhance the safety, operational efficiency, and sustainability of merchant vessels. It accomplishes this by continuously analyzing real-time data from sensors and ship systems and delivering actionable insights to vessel operators.


    Data Collection

    • Real-time Data Sources: The project relies on real-time data from sensors and ship systems, encompassing engine performance, weather conditions, cargo load, crew activity, and more.

    • Data Pipeline: A data pipeline is established to collect, transmit, and preprocess data. Data collection components on the vessel are configured to transmit data to a designated data processing infrastructure.

    Data Preprocessing

    • Data Cleaning: Raw sensor data is meticulously cleaned to eliminate noise, outliers, and data quality issues.

    • Feature Engineering: The project extracts relevant features and performs transformations to prepare the data for machine learning analysis.
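
    This README ships no code, but the cleaning and feature-engineering steps above can be illustrated with a minimal pandas sketch. It is only a sketch under assumed column names (engine_rpm and fuel_flow are hypothetical), not the project's actual pipeline.

    import pandas as pd

    def preprocess(raw: pd.DataFrame) -> pd.DataFrame:
        """Clean raw sensor readings and derive a simple feature (illustrative only)."""
        df = raw.copy()
        # Drop rows with missing critical readings (hypothetical column names)
        df = df.dropna(subset=["engine_rpm", "fuel_flow"])
        # Clip obvious outliers to the 1st/99th percentiles
        for col in ["engine_rpm", "fuel_flow"]:
            lo, hi = df[col].quantile([0.01, 0.99])
            df[col] = df[col].clip(lo, hi)
        # Example engineered feature: fuel used per engine revolution
        df["fuel_per_rev"] = df["fuel_flow"] / df["engine_rpm"].where(df["engine_rpm"] != 0)
        return df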

    Machine Learning Analysis

    • Machine Learning Models: ML models, trained on historical data, detect patterns, anomalies, and correlations within real-time data.

    • Predictive Maintenance: ML models predict equipment failures and maintenance requirements by continuously monitoring critical equipment (see the sketch after this list).

    • Fuel Consumption Optimization: Models analyze real-time data to determine optimal fuel consumption settings based on vessel conditions.

    • Route Planning and Optimization: ML models provide route recommendations by considering real-time weather data, maritime traffic, and other variables.

    • Crew Performance Monitoring: ML models assess crew performance and safety by identifying signs of fatigue or safety concerns.
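
    As an illustration of the predictive-maintenance idea only (not the project's actual model), an unsupervised anomaly detector over engine readings could be sketched with scikit-learn; the feature columns below are hypothetical.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Historical engine readings: columns are [rpm, oil_temp_c, vibration_rms] (hypothetical)
    rng = np.random.default_rng(0)
    history = rng.normal(loc=[900.0, 80.0, 0.2], scale=[30.0, 3.0, 0.05], size=(1000, 3))

    model = IsolationForest(contamination=0.01, random_state=0).fit(history)

    # Score a new real-time reading; predict() returns -1 for a suspected anomaly, 1 otherwise
    reading = np.array([[910.0, 96.0, 0.45]])
    if model.predict(reading)[0] == -1:
        print("Possible equipment issue - schedule an inspection")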

    Real-time Decision Support

    • User Interface: A user-friendly interface or dashboard presents real-time insights, recommendations, and alerts generated by the ML models.

    • Actionable Recommendations: ML models provide actionable recommendations for vessel operators, crew members, and stakeholders.

    Continuous Learning and Feedback Loop

    • Iterative Process: The project establishes an iterative feedback loop, continually learning from both successful and unsuccessful recommendations.

    • Monitoring Impact: The project tracks the impact of its recommendations on vessel safety, efficiency, and sustainability, evaluating the effectiveness of actions taken in response to recommendations.

    Compliance and Reporting

    • Regulatory Compliance: The project ensures compliance with maritime regulations and safety standards. It also focuses on environmental compliance to reduce the vessel’s ecological footprint.

    • Reporting: Comprehensive reports are generated for vessel operators and stakeholders, showcasing performance improvements, safety enhancements, and environmental impact reductions achieved through the project.

    Conclusion

    In conclusion, the Merchant Vessel Safety and Performance Enhancement project with Machine Learning is an advanced system that employs real-time data analysis, machine learning, and continuous learning to boost the safety, efficiency, and sustainability of merchant vessels. It delivers valuable insights and recommendations to vessel operators, contributing to safer and more cost-effective maritime operations.

  • save-file-converter

    Save File Converter

    Web-based tool to convert save files from retro game consoles to different formats

    Save file conversion

    Available at https://savefileconverter.com

    Upcoming features

    Contact

    If you have questions, need help, or have comments or suggestions, please hop on Discord: https://discord.gg/wtJ7xUKKTR

    Or email savefileconverter (at) gmail (dot) com

    Donations

    Everything on this site is free and open source with no advertising. If you find the site helpful and want to donate, you can do so here:

    Donate

    Emulators with incompatible save formats

    Save file formats

    Cart reader notes

    • Retrode2
      • Genesis: SRAM/FRAM saves are byte-expanded by doubling: “HELLO” becomes “HHEELLLLOO” rather than ” H E L L O” as in many emulators/flash carts (see the sketch after this list)
    • Retroblaster
      • Same as Retrode2
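
    A small Python sketch (not part of the converter itself) of undoing that byte doubling might look like this:

    def unexpand_retrode_sram(data: bytes) -> bytes:
        """Collapse Retrode2-style doubled bytes: b'HHEELLLLOO' -> b'HELLO' (illustrative)."""
        if len(data) % 2 != 0 or any(data[i] != data[i + 1] for i in range(0, len(data), 2)):
            raise ValueError("input does not look byte-doubled")
        return data[::2]

    assert unexpand_retrode_sram(b"HHEELLLLOO") == b"HELLO"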

    GBA save file size difficulty

    Real-Time Clock save format

    Some platforms (e.g. some MiSTer cores) append RTC data to the end of a save file. The above link describes a common format for RTC data.
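
    As a rough illustration only (this is an assumption, not the converter's actual logic), if the underlying save has a power-of-two size, any trailing bytes can be treated as the appended RTC data:

    def split_trailing_rtc(data: bytes) -> tuple[bytes, bytes]:
        """Split a save into (body, trailing RTC bytes), assuming the body is a power-of-two size."""
        size = 1
        while size * 2 <= len(data):
            size *= 2
        return data[:size], data[size:]

    body, rtc = split_trailing_rtc(b"\x00" * 32768 + b"\x01" * 64)
    assert len(body) == 32768 and len(rtc) == 64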

    PSP decompiling

    Offline use

    Occasionally there’s a need to use the tool offline, such as when you’ll be without an Internet connection for an extended period. There are two methods to achieve this:

    Method 1: Use a website saving tool

    You can’t just right click on the page and select Save As… because the site is divided internally into many different files, and that will only download some of them.

    Google “website saving tool” or something similar to find an up-to-date list of such tools.

    Method 2: Build it locally (for people comfortable with the command line and development tools)

    You may need to modify some of these steps depending on your development environment, but this should give you the general idea.

    MacOS/Linux

    Install homebrew: https://brew.sh/

    brew install yarn
    brew install git
    

    Then proceed to the Common section

    Windows

    Find an equivalent package manager to homebrew, and use it to install git and yarn (or install them and their dependencies manually: git: https://github.com/git-guides/install-git, yarn: https://yarnpkg.com/getting-started/install)

    Then proceed to the Common section

    Common

    git clone git@github.com:euan-forrester/save-file-converter.git
    cd save-file-converter/frontend
    yarn install
    yarn serve
    

    Then open http://localhost:8080/ in your browser.

    Note that you’ll have to keep the command line window open with yarn serve running for as long as you want to access the site.

    Internet archive

    If you need to, you can also access the site via the Internet archive here: https://web.archive.org/web/https://savefileconverter.com/

  • silverstripe-maintenance

    Silverstripe Maintenance

    CI Silverstripe supported module

    Overview

    The Silverstripe Maintenance module reduces your maintenance related work.

    UI Preview

    Requirements

    • Requires the composer.json and composer.lock files to be available and readable in the environment you plan to use this module. All information is based on these files.
    • The queuedjobs module updates metadata on your installed modules in the background. You need to configure it to run those jobs.
    • For the optional update checkers, the webserver environment needs to be able to contact external information sources through network requests.
    • SilverStripe:
      • Maintenance ^2.2: Silverstripe ^4.4
      • Maintenance ~2.1.0: Silverstripe 4.0-4.3
      • Maintenance ^1.0: Silverstripe 3.x

    Suggested Modules

    By default, the module will read your installed modules, and present them as a report in the CMS under admin/reports.

    In order to get information about potential updates to these modules, we recommend the installation of the following additional module:

    The previously recommended silverstripe-composer-security-checker module no longer works and is no longer recommended.

    Installation

    Option 1 (recommended): Install the maintenance package and suggested dependency

    composer require bringyourownideas/silverstripe-maintenance bringyourownideas/silverstripe-composer-update-checker
    

    Option 2 (minimal): Install only the maintenance package without any update checks

    composer require bringyourownideas/silverstripe-maintenance
    

    Build schema and queue an initial job to populate the database:

    sake dev/build
    

    If you haven’t already, you need to configure the job queue to update module metadata in the background. By default, this happens every day, but can be configured to run at different intervals through YAML config:

    BringYourOwnIdeas\Maintenance\Jobs\CheckForUpdatesJob:
      reschedule_delay: '+1 hour'

    Manually running tasks

    By default, tasks are run through a job queue. You can also choose to manually refresh via the command line.

    Run the update task (includes the update-checker)

    sake dev/tasks/UpdatePackageInfoTask
    

    How your composer.json influences the report

    The report available through the CMS shows “Available” and “Latest” versions (see user guide). The version recommendations in those columns depend on your composer.json configuration. When setting tight constraints (e.g. silverstripe/framework:4.3.2@stable), newer releases don’t show up as expected. We recommend using looser constraints by default (e.g. silverstripe/framework:^4.3). When the “Latest” version shows dev-master, it likely means that you have "minimum-stability": "dev" in your composer.json.

    Documentation

    Please see the user guide section.

    Contributing

    Contributions are welcome! Create an issue, explaining a bug or propose development ideas. Find more information on contributing in the Silverstripe developer documentation.

    Reporting Issues

    Please create an issue for any bugs you’ve found, or features you’re missing.

  • ledger-stellar

    Ledger Stellar App

    Compilation & tests Swap function tests


    NOTE

    This repository is now archived because it has been merged into LedgerHQ/app-stellar.


    Introduction

    This is the wallet app for the Ledger Nano S, Ledger Nano S Plus and Ledger Nano X that makes it possible to store Stellar-based assets on those devices and generally sign any transaction for the Stellar network.

    Documentation

    This app follows the specification available in the ./docs folder.

    SDK

    You can communicate with the app through the following libraries:

    Building and installing

    If not for development purposes, you should install this app via Ledger Live.

    To build and install the app on your Nano S or Nano S Plus you must set up the Ledger build environments. Please follow the “load the application” instructions at the Ledger developer portal.

    Additionally, install this dependency:

    sudo apt install libbsd-dev

    The command to compile and load the app onto the device is:

    make load

    To remove the app from the device do:

    make delete

    Testing

    This project provides unit tests, integration tests, and end-to-end tests. The unit tests are located under the ./tests_unit folder, and the integration tests and end-to-end tests are located under the ./tests_zemu folder.

    During development, we recommend running the unit tests first, as they take less time, and then running the other tests once the unit tests pass.

    Unit testing

    The ./tests_unit directory contains files for testing the utils, the xdr transaction parser, the screen formatter and the swap function.

    They require Node.js, the cmocka unit testing framework, CMake, and libbsd to be installed:

    sudo apt install libcmocka-dev cmake libbsd-dev

    It is recommended to use nvm to install the latest LTS version of Node.js

    To build and execute the tests, run the following command:

    make tests-unit

    Integration testing and end-to-end testing

    Testing is done via the open-source framework zemu.

    In order to run these tests, you need to install Docker in addition to the dependencies mentioned in Unit testing.

    To build and execute the tests, run the following commands:

    make tests-zemu

    To run a specific test, run the following commands:

    cd tests_zemu
    npm run test -- -t "{testCaseName}"
  • hrr_rb_ssh

    HrrRbSsh

    Build Status Maintainability Test Coverage Gem Version

    hrr_rb_ssh is a pure Ruby SSH 2.0 server and client implementation.

    With hrr_rb_ssh, it is possible to write an SSH server easily, and also to build an original server-side application on the secure connection provided by the SSH protocol. It supports writing an SSH client as well.

    NOTE: ED25519 public key algorithm is now separated from hrr_rb_ssh. Please refer to hrr_rb_ssh-ed25519.

    Table of Contents

    Installation

    Add this line to your application’s Gemfile:

    gem 'hrr_rb_ssh'

    And then execute:

    $ bundle
    

    Or install it yourself as:

    $ gem install hrr_rb_ssh
    

    Usage

    Requiring hrr_rb_ssh library

    First of all, hrr_rb_ssh library needs to be loaded.

    require 'hrr_rb_ssh'

    Logging

    IMPORTANT: DEBUG log level outputs all communications between local and remote in human-readable plain text, including passwords and any secrets. Be careful when enabling logging.

    The library provides logging functionality. To enable logging in the library, pass a logger to Server.new or Client.new.

    HrrRbSsh::Server.new options, logger: logger

    or

    HrrRbSsh::Client.new target, options, logger: logger

    Where, the logger variable can be an instance of standard Logger class or user-defined logger class. What the library requires for logger variable is that the logger instance responds to #fatal, #error, #warn, #info and #debug with the following syntax.

    logger.fatal(progname){ message }
    logger.error(progname){ message }
    logger.warn(progname){ message }
    logger.info(progname){ message }
    logger.debug(progname){ message }

    For instance, logger variable can be prepared like below.

    logger = Logger.new STDOUT
    logger.level = Logger::INFO

    Writing standard SSH server

    Starting server application

    The library runs on a socket IO. To start an SSH server, you need to run a server socket and accept connections. The 10022 port number is just an example.

    options = Hash.new
    server = TCPServer.new 10022
    loop do
      Thread.new(server.accept) do |io|
        pid = fork do
          begin
            server = HrrRbSsh::Server.new options
            server.start io
          ensure
            io.close
          end
        end
        io.close
        Process.waitpid pid
      end
    end

    Where, an options variable is an instance of Hash, which has optional (or sometimes almost necessary) values.

    Registering pre-generated secret keys for server host key

    By default, server host keys are generated every time the gem is loaded. To use pre-generated keys, it is possible to register the keys in HrrRbSsh::Transport through the options variable. The secret key value must be a PEM or DER format string. The below is an example of registering an ecdsa-sha2-nistp256 secret key. The supported server host key algorithms are listed later in this document.

    options['transport_server_secret_host_keys'] = {}
    options['transport_server_secret_host_keys']['ecdsa-sha2-nistp256'] = <<-'EOB'
    -----BEGIN EC PRIVATE KEY-----
    MHcCAQEEIFFtGZHk6A8anZkLCJan9YBlB63uCIN/ZcQNCaJout8loAoGCCqGSM49
    AwEHoUQDQgAEk8m548Xga+XGEmRx7P71xGlxCfgjPj3XVOw+fXPXRgA03a5yDJEp
    OfeosJOO9twerD7pPhmXREkygblPsEXaVA==
    -----END EC PRIVATE KEY-----
    EOB

    Defining authentications

    By default, all authentication attempts fail. To allow users to log in to the SSH service, at least one of the authentication methods must be defined and registered into the instance of HrrRbSsh::Authentication through the options variable.

    The library provides several strategies for handling authentication.

    Single authentication

    Each authenticator returns true (or HrrRbSsh::Authentication::SUCCESS) or false (or HrrRbSsh::Authentication::FAILURE). When it is true, the user is accepted. When it is false, the user is not accepted and a subsequent authenticator is called.

    Password authentication

    Password authentication is the simplest way to allow users to log in to the SSH service. Password authentication requires a user name and password.

    To define a password authentication, the HrrRbSsh::Authentication::Authenticator.new { |context| ... } block is used. When the block returns true, then the authentication succeeded.

    auth_password = HrrRbSsh::Authentication::Authenticator.new { |context|
      user_and_pass = [
        ['user1',  'password1'],
        ['user2',  'password2'],
      ]
      user_and_pass.any? { |user, pass|
        context.verify user, pass
      }
    }
    options['authentication_password_authenticator'] = auth_password

    The context variable in password authentication context provides the following.

    • #username : The username that a remote user tries to authenticate
    • #password : The password that a remote user tries to authenticate
    • #variables : A hash instance that is shared in each authenticator and subsequent session channel request handlers
    • #vars : The same object that #variables returns
    • #verify(username, password) : Returns true when username and password arguments match with the context’s username and password. Or returns false when username and password arguments don’t match.
    Publickey authentication

    The second one is public key authentication. Public key authentication requires user-name, public key algorithm name, and PEM or DER formed public key.

    To define a public key authentication, the HrrRbSsh::Authentication::Authenticator.new { |context| ... } block is used as well. When the block returns true, then the authentication succeeded as well. However, context variable behaves differently.

    auth_publickey = HrrRbSsh::Authentication::Authenticator.new { |context|
      username = ENV['USER']
      authorized_keys = HrrRbSsh::Compat::OpenSSH::AuthorizedKeys.new(File.read(File.join(Dir.home, '.ssh', 'authorized_keys')))
      authorized_keys.any?{ |public_key|
        context.verify username, public_key.algorithm_name, public_key.to_pem
      }
    }
    options['authentication_publickey_authenticator'] = auth_publickey

    The context variable in public key authentication context provides the #verify method. The #verify method takes three arguments; username, public key algorithm name and PEM or DER formed public key.

    Public keys in OpenSSH public key format are also supported. To use OpenSSH public keys, it is easy to use the $USER_HOME/.ssh/authorized_keys file.

    Keyboard-interactive authentication

    The third one is keyboard-interactive authentication. This is also known as challenge-response authentication.

    To define a keyboard-interactive authentication, the HrrRbSsh::Authentication::Authenticator.new { |context| ... } block is used as well. When the block returns true, then the authentication succeeded as well. However, context variable behaves differently.

    auth_keyboard_interactive = HrrRbSsh::Authentication::Authenticator.new { |context|
      user_name        = 'user1'
      req_name         = 'demo keyboard interactive authentication'
      req_instruction  = 'demo instruction'
      req_language_tag = ''
      req_prompts = [
        #[prompt[n], echo[n]]
        ['Password1: ', false],
        ['Password2: ', true],
      ]
      info_response = context.info_request req_name, req_instruction, req_language_tag, req_prompts
      context.username == user_name && info_response.responses == ['password1', 'password2']
    }
    options['authentication_keyboard_interactive_authenticator'] = auth_keyboard_interactive

    The context variable in keyboard-interactive authentication context does NOT provide the #verify method. Instead, the #info_request method is available. Since keyboard-interactive authentication involves multiple interactions between server and client, the values in the responses need to be verified individually.

    The #info_request method takes four arguments: name, instruction, language tag, and prompts. The name, instruction, and language tag can be empty strings. Each prompt needs to have at least one character for the prompt message, and a true or false value to specify whether echo back is enabled or not.

    The responses are listed in the same order as request prompts.

    None authentication (NOT recommended)

    The last one is none authentication. None authentication is usually NOT used.

    To define a none authentication, the HrrRbSsh::Authentication::Authenticator.new { |context| ... } block is used as well. When the block returns true, then the authentication succeeded as well. However, context variable behaves differently.

    auth_none = HrrRbSsh::Authentication::Authenticator.new { |context|
      if context.username == 'user1'
        true
      else
        false
      end
    }
    options['authentication_none_authenticator'] = auth_none

    In none authentication context, context variable provides the #username method.

    Multi-step authentication

    In this strategy, which combines single authentications, it is possible to implement multi-step authentication. In case the combination is a publickey authentication method and a password authentication method, it is so-called two-factor authentication.

    A return value of each authentication handler can be HrrRbSsh::Authentication::PARTIAL_SUCCESS. The value means that the authentication method returns success and another authentication method is requested (i.e. the authentication method is deleted from the list of authentications that can continue, and then the server sends a USERAUTH_FAILURE message with the updated list of authentications that can continue and partial success true). When all preferred authentication methods return PARTIAL_SUCCESS (i.e. there are no more authentications that can continue), then the user is treated as authenticated.

    auth_preferred_authentication_methods = ["publickey", "password"]
    auth_publickey = HrrRbSsh::Authentication::Authenticator.new { |context|
      is_verified = some_verification_method(context)
      if is_verified
        HrrRbSsh::Authentication::PARTIAL_SUCCESS
      else
        false
      end
    }
    auth_password = HrrRbSsh::Authentication::Authenticator.new { |context|
      is_verified = some_verification_method(context)
      if is_verified
        HrrRbSsh::Authentication::PARTIAL_SUCCESS
      else
        false
      end
    }
    options['authentication_preferred_authentication_methods'] = auth_preferred_authentication_methods
    options['authentication_publickey_authenticator'] = auth_publickey
    options['authentication_password_authenticator'] = auth_password
    More flexible authentication

    A context variable in an authenticator gives an access to remaining authentication methods that can continue. In this strategy, an implementer is able to control the order of authentication methods and to control which authentication methods are used for the user.

    The below is an example. It is expected that any user must be verified by publickey and then another authentication is requested for the user accordingly.

    auth_preferred_authentication_methods = ['none']
    auth_none = HrrRbSsh::Authentication::Authenticator.new{ |context|
      context.authentication_methods.push 'publickey'
      HrrRbSsh::Authentication::PARTIAL_SUCCESS
    }
    auth_publickey = HrrRbSsh::Authentication::Authenticator.new{ |context|
      if some_verification(context)
        case context.username
        when 'user1'
          context.authentication_methods.push 'keyboard-interactive'
          HrrRbSsh::Authentication::PARTIAL_SUCCESS
        else
          false
        end
      else
        false
      end
    }
    auth_keyboard_interactive = HrrRbSsh::Authentication::Authenticator.new{ |context|
      if some_verification(context)
        true # or HrrRbSsh::Authentication::PARTIAL_SUCCESS; both will accept the user because remaining authentication method is only 'keyboard-interactive' in this case
      else
        false
      end
    }
    options['authentication_preferred_authentication_methods'] = auth_preferred_authentication_methods
    options['authentication_none_authenticator'] = auth_none
    options['authentication_publickey_authenticator'] = auth_publickey
    options['authentication_keyboard_interactive_authenticator'] = auth_keyboard_interactive

    Handling session channel requests

    By default, any channel requests belonging to a session channel are implicitly ignored. To handle the requests, defining request handlers is required.

    Reference request handlers

    There are pre-implemented request handlers available for reference as below.

    options['connection_channel_request_pty_req']       = HrrRbSsh::Connection::RequestHandler::ReferencePtyReqRequestHandler.new
    options['connection_channel_request_env']           = HrrRbSsh::Connection::RequestHandler::ReferenceEnvRequestHandler.new
    options['connection_channel_request_shell']         = HrrRbSsh::Connection::RequestHandler::ReferenceShellRequestHandler.new
    options['connection_channel_request_exec']          = HrrRbSsh::Connection::RequestHandler::ReferenceExecRequestHandler.new
    options['connection_channel_request_window_change'] = HrrRbSsh::Connection::RequestHandler::ReferenceWindowChangeRequestHandler.new
    Custom request handlers

    It is also possible to define customized request handlers. For instance, an echo server can be implemented very easily as below. In this case, the echo server works instead of a shell, and the pty-req and env requests are left undefined.

    conn_echo = HrrRbSsh::Connection::RequestHandler.new { |context|
      context.chain_proc { |chain|
        begin
          loop do
            buf = context.io[0].readpartial(10240)
            break if buf.include?(0x04.chr) # break if ^D
            context.io[1].write buf
          end
          exitstatus = 0
        rescue => e
          logger.error([e.backtrace[0], ": ", e.message, " (", e.class.to_s, ")\n\t", e.backtrace[1..-1].join("\n\t")].join)
          exitstatus = 1
        end
        exitstatus
      }
    }
    options['connection_channel_request_shell'] = conn_echo

    In the HrrRbSsh::Connection::RequestHandler.new block, the context variable basically provides the following.

    • #io => [in, out, err] : in is readable and read data is sent by remote. out and err are writable. out is for standard output and written data is sent as channel data. err is for standard error and written data is sent as channel extended data.
    • #chain_proc => {|chain| ... } : When a session channel is opened, a background thread is started and waits for a chained block to be registered. This #chain_proc is used to define how to handle subsequent communications between local and remote. The chain variable provides the #call_next method. In the #chain_proc block, it is possible to call a subsequent block that is defined in another request handler. For instance, a shell request must be called after a pty-req request. The chain in the pty-req request handler’s #chain_proc calls #call_next and then the subsequent shell request handler’s #chain_proc will be called.
    • #close_session : In most cases, input and output between a client and the server is handled in #chain_proc, and closing the #chain_proc block will lead to closing the underlying session channel. This means that to close the underlying session channel it is required to write at least one #chain_proc block. If it is not required to use a #chain_proc block, or if it is required to close the underlying session channel from outside of the #chain_proc block, #close_session can be used. The #close_session method will close the background thread that calls #chain_proc blocks.
    • #variables => Hash : A hash instance that is passed from authenticator and is shared in subsequent session channel request handlers
    • #vars : The same object that #variables returns

    A request handler’s context variable also provides additional methods based on the request type. See lib/hrr_rb_ssh/connection/channel/channel_type/session/request_type/<request type>/context.rb.

    Defining preferred algorithms (optional)

    Preferred encryption, server host key, KEX, MAC, and compression algorithms can be selected and defined.

    options['transport_preferred_encryption_algorithms']      = %w(aes256-ctr aes128-cbc)
    options['transport_preferred_server_host_key_algorithms'] = %w(ecdsa-sha2-nistp256 ssh-rsa)
    options['transport_preferred_kex_algorithms']             = %w(ecdh-sha2-nistp256 diffie-hellman-group14-sha1)
    options['transport_preferred_mac_algorithms']             = %w(hmac-sha2-256 hmac-sha1)
    options['transport_preferred_compression_algorithms']     = %w(none)

    Supported algorithms can be got with each algorithm class’s #list_supported method, and default preferred algorithms can be got with each algorithm class’s #list_preferred method.

    Outputs of #list_preferred method are ordered as preferred; i.e. the name listed at head is used as most preferred, and the name listed at tail is used as non-preferred.

    p HrrRbSsh::Transport::EncryptionAlgorithm.list_supported
    # => ["none", "3des-cbc", "blowfish-cbc", "aes128-cbc", "aes192-cbc", "aes256-cbc", "arcfour", "cast128-cbc", "aes128-ctr", "aes192-ctr", "aes256-ctr"]
    p HrrRbSsh::Transport::EncryptionAlgorithm.list_preferred
    # => ["aes128-ctr", "aes192-ctr", "aes256-ctr", "aes128-cbc", "3des-cbc", "blowfish-cbc", "cast128-cbc", "aes192-cbc", "aes256-cbc", "arcfour"]
    
    p HrrRbSsh::Transport::ServerHostKeyAlgorithm.list_supported
    # => ["ssh-dss", "ssh-rsa", "ecdsa-sha2-nistp256", "ecdsa-sha2-nistp384", "ecdsa-sha2-nistp521"]
    p HrrRbSsh::Transport::ServerHostKeyAlgorithm.list_preferred
    # => ["ecdsa-sha2-nistp521", "ecdsa-sha2-nistp384", "ecdsa-sha2-nistp256", "ssh-rsa", "ssh-dss"]
    
    p HrrRbSsh::Transport::KexAlgorithms.new.list_supported
    # => ["diffie-hellman-group1-sha1", "diffie-hellman-group14-sha1", "diffie-hellman-group-exchange-sha1", "diffie-hellman-group-exchange-sha256", "diffie-hellman-group14-sha256", "diffie-hellman-group15-sha512", "diffie-hellman-group16-sha512", "diffie-hellman-group17-sha512", "diffie-hellman-group18-sha512", "ecdh-sha2-nistp256", "ecdh-sha2-nistp384", "ecdh-sha2-nistp521"]
    p HrrRbSsh::Transport::KexAlgorithms.new.list_preferred
    # => ["ecdh-sha2-nistp521", "ecdh-sha2-nistp384", "ecdh-sha2-nistp256", "diffie-hellman-group18-sha512", "diffie-hellman-group17-sha512", "diffie-hellman-group16-sha512", "diffie-hellman-group15-sha512", "diffie-hellman-group14-sha256", "diffie-hellman-group-exchange-sha256", "diffie-hellman-group-exchange-sha1", "diffie-hellman-group14-sha1", "diffie-hellman-group1-sha1"]
    
    p HrrRbSsh::Transport::MacAlgorithm.list_supported
    # => ["none", "hmac-sha1", "hmac-sha1-96", "hmac-md5", "hmac-md5-96", "hmac-sha2-256", "hmac-sha2-512"]
    p HrrRbSsh::Transport::MacAlgorithm.list_preferred
    # => ["hmac-sha2-512", "hmac-sha2-256", "hmac-sha1", "hmac-md5", "hmac-sha1-96", "hmac-md5-96"]
    
    p HrrRbSsh::Transport::CompressionAlgorithm.list_supported
    # => ["none", "zlib"]
    p HrrRbSsh::Transport::CompressionAlgorithm.list_preferred
    # => ["none", "zlib"]

    Hiding and/or simulating local SSH version

    By default, hrr_rb_ssh sends SSH-2.0-HrrRbSsh-#{VERSION} string at initial negotiation with remote peer. To address security concerns, it is possible to replace the version string.

    # Hiding version
    options['local_version'] = "SSH-2.0-HrrRbSsh"
    
    # Simulating OpenSSH
    options['local_version'] = "SSH-2.0-OpenSSH_x.x"
    
    # Simulating OpenSSH and hiding version
    options['local_version'] = "SSH-2.0-OpenSSH"

    Please note that the beginning of the string must be SSH-2.0-. Otherwise SSH 2.0 remote peer cannot continue negotiation with the local peer.

    Writing SSH client (Experimental)

    Starting SSH connection

    The client mode can be started with HrrRbSsh::Client.start. The method takes target and options arguments. The target that the SSH client connects to can be one of:

    • (IO) An io that is open for input and output
    • (Array) An array of the target host address or host name and its service port number
    • (String) The target host address or host name; in this case the target service port number will be 22

    And the options hash contains various parameters for the SSH connection. At least the username key must be set in the options. Also, at least one of password, publickey, or keyboard-interactive needs to be set for authentication, instead of the authenticators that are used in server mode. As in server mode, it is possible to specify preferred transport algorithms and preferred authentication methods with the same keywords.

    target = ['remotehost', 22]
    options = {
      username: 'user1',
      password: 'password1',
      publickey: ['ssh-rsa', "/home/user1/.ssh/id_rsa"],
      authentication_preferred_authentication_methods: ['publickey', 'password'],
    }
    HrrRbSsh::Client.start(target, options) do |conn|
      # Do something here
      # For instance: conn.exec "command"
    end

    Executing remote commands

    There are some methods supported in client mode. The methods are called on the conn block variable.

    exec method

    The exec and exec! methods execute a command on a remote host. Both take a command argument that is executed on the remote host. And they can take optional pty and env arguments. When pty: true is set, the command will be executed on a pseudo-TTY. When env: {'key' => 'value'} is set, the environment variables are set before the command is executed.

    The exec! method returns [stdout, stderr] outputs. Once the command has been executed and the outputs are complete, the method returns the value.

    conn.exec! "command" # => [stdout, stderr]

    On the other hand, the exec method takes a block like the example below and returns the exit status of the command. When the command has been executed and the outputs and reading them are finished, io_out and io_err return EOF.

    conn.exec "command" do |io_in, io_out, io_err|
      # Do something here
    end
    shell method

    The shell method provides shell access on a remote host. Similar to the exec method, it takes a block whose block variables are also io_in, io_out, io_err. shell always runs on a pseudo-TTY, so it doesn’t take the pty optional argument. It does take the env optional argument. Exiting the shell leads io_out and io_err to EOF.

    conn.shell do |io_in, io_out, io_err|
      # Do something here
    end
    subsystem method

    The subsystem method starts a subsystem on a remote host. The method takes a subsystem name argument and a block. Its block variables are also io_in, io_out, io_err. subsystem takes neither the pty nor the env optional argument.

    conn.subsystem("echo") do |io_in, io_out, io_err|
      # Do something here
    end

    Demo

    The demo/server.rb shows a good example of how to use the hrr_rb_ssh library in SSH server mode, and the demo/client.rb shows an example of how to use the library in SSH client mode.

    Supported Features

    The following features are currently supported.

    Connection layer

    • Session channel
      • Shell (PTY-req, env, shell, window-change) request
      • Exec request
      • Subsystem request
    • Local port forwarding (direct-tcpip channel)
    • Remote port forwarding (tcpip-forward global request and forwarded-tcpip channel)

    Authentication layer

    • None authentication
    • Password authentication
    • Public key authentication
      • ssh-dss
      • ssh-rsa
      • ecdsa-sha2-nistp256
      • ecdsa-sha2-nistp384
      • ecdsa-sha2-nistp521
    • Keyboard interactive (generic interactive / challenge response) authentication

    Transport layer

    • Encryption algorithm
      • none
      • 3des-cbc
      • blowfish-cbc
      • aes128-cbc
      • aes192-cbc
      • aes256-cbc
      • arcfour
      • cast128-cbc
      • aes128-ctr
      • aes192-ctr
      • aes256-ctr
    • Server host key algorithm
      • ssh-dss
      • ssh-rsa
      • ecdsa-sha2-nistp256
      • ecdsa-sha2-nistp384
      • ecdsa-sha2-nistp521
    • Kex algorithm
      • diffie-hellman-group1-sha1
      • diffie-hellman-group14-sha1
      • diffie-hellman-group-exchange-sha1
      • diffie-hellman-group-exchange-sha256
      • diffie-hellman-group14-sha256
      • diffie-hellman-group15-sha512
      • diffie-hellman-group16-sha512
      • diffie-hellman-group17-sha512
      • diffie-hellman-group18-sha512
      • ecdh-sha2-nistp256
      • ecdh-sha2-nistp384
      • ecdh-sha2-nistp521
    • Mac algorithm
      • none
      • hmac-sha1
      • hmac-sha1-96
      • hmac-md5
      • hmac-md5-96
      • hmac-sha2-256
      • hmac-sha2-512
    • Compression algorithm
      • none
      • zlib

    Contributing

    Bug reports and pull requests are welcome on GitHub at https://github.com/hirura/hrr_rb_ssh. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the Contributor Covenant code of conduct.

    Code of Conduct

    Everyone interacting in the HrrRbSsh project’s codebases, issue trackers, chat rooms and mailing lists is expected to follow the code of conduct.

    License

    The gem is available as open source under the terms of the Apache License 2.0.

  • spreadsheet

    Spreadsheet

    TypeScript/javascript spreadsheet parser, with formulas. For a full list of formulas, see DOCS.md.

    Usage

    Install

    npm install js-spreadsheet
    

    Examples

    Using a Sheet

    var Sheet = require("js-spreadsheet").Sheet;
    var sheet = new Sheet();
    sheet.setCell("A1", "10");
    sheet.setCell("A2", "14");
    sheet.setCell("A4", "10e2");
    sheet.setCell("A5", "99.1");
    sheet.setCell("B1", "=SUM(A1:A5)");
    sheet.getCell("B1").getValue(); // returns: 1123.1

    Using Formulas Directly

    var Formulas = require("js-spreadsheet").AllFormulas;
    Formulas.SUM(1, 2, 3, [4, 5, 6], "7"); // returns: 28

    For a full list of formulas, see DOCS.md

    Nested Formulas

    sheet.setCell('A1', '=SIN(PI() / 2)')
    sheet.getCell("A1").getValue(); // returns: 1

    Date Conversion

    sheet.setCell('A1', '=DATEDIF("1992-6-19", "1996-6-19", "Y")')
    sheet.getCell("A1").getValue(); // returns: 4

    Number Parsing

    sheet.setCell('A1', '="10e1" + 44');
    sheet.getCell("A1").getValue(); // returns: 144
    
    sheet.setCell('A2', '="1,000,000" + 1');
    sheet.getCell("A2").getValue(); // returns: 1000001
    
    sheet.setCell('A3', '="-$10.00" + 0');
    sheet.getCell("A3").getValue(); // returns: -10
    
    sheet.setCell('A4', '=10% + 1');
    sheet.getCell("A4").getValue(); // returns: 1.1
    
    sheet.setCell('A5', '= 2 ^ 10');
    sheet.getCell("A5").getValue(); // returns: 1024

    Ranges

    In MS Excel and Google Spreadsheets, literal ranges are denoted with opening and closing curly brackets, e.g. “{1, 2, 3}”. In this implementation, however, literal ranges are denoted with opening and closing square brackets, e.g. “[1, 2, 3]”.

    // OK
    sheet.setCell('A1', '=SUM([1, 2, 3])');
    // NOT OK
    sheet.setCell('A1', '=SUM({1, 2, 3})');

    Docs

    See DOCS.md for full list and documentation of all formulas available.

    Contributing

    When adding a formula, or fixing a bug please follow the commit message format:

    [BUG_FEATURE_FILE_OR_COMPONENT] short description here of issue and fix
    

    If you’re adding a new formula, before you submit a pull request or push ensure that:

    1. The formula is tested inside the proper category file in tests/Formulas.
    2. Make sure the formula is exported, and imported/exported in AllFormulas.ts.
    3. The formula tests for reference errors, N/A errors, value errors for each input.
    4. That the formula is tested for parsing inside SheetFormulaTest.ts.
    5. Run tests with npm run test.
    6. Build with npm run build.
    7. Build DOCS.md with npm run docs.

    Why?

    Near the end of 2016 I began to ask myself why I didn’t know more about MS Excel and Google Spreadsheets. Why didn’t I know more about the most popular programming language in the world? I began to reverse engineer Google Spreadsheets in particular, gaining a better understanding along the way.

    I chose TypeScript because, coming from Java, it is really nice to be able to see type errors, and catch them. I also just enjoy getting specific with my return types, even if the specifications for a spreadsheet treat type flexibly.

    For the formula documentation, I tried to be at least as thorough as Google Spreadsheets, if not more so.

    License

    For this repository’s code license, and related licenses, see LICENSES directory.

    Acknowledgements

    This is largely a re-write of Handsontable‘s https://github.com/handsontable/ruleJS, and https://github.com/sutoiku/formula.js/. The parser was derived from Handsontable’s, and many of the formulas were created with FormulaJS’s formulas as a reference point.


  • Software-Solutions-for-Reproducible-ML-Experiments

    Software Solutions for Reproducible ML Experiments

    This repository contains auxiliary material for the article: “A Taxonomy of Tools for Reproducible Machine Learning Experiments” by Luigi Quaranta, Fabio Calefato, and Filippo Lanubile.

    In the remainder of this README, the full sample of analyzed tools is classified according to the features of the taxonomy presented in the paper; for the reader’s convenience, a figure representing the taxonomy is also shown below.

    Creative Commons License
    The tool categorization reported in this README as well as the figure representing the taxonomy are licensed under a Creative Commons Attribution 4.0 International License.

    Please, include the following citation if you intend to (re)use our work:

    L. Quaranta, F. Calefato and F. Lanubile, “A Taxonomy of Tools for Reproducible Machine Learning Experiments,” Proceedings of the AIxIA 2021 Discussion Papers Workshop (AIxIA DP 2021), 2021, pp. 65-76, online: CEUR-WS.org/Vol-3078/paper-81.pdf.

    The Taxonomy

    Taxonomy

    Tools Review

    General

    The tool sample classified according to the features of the General category.

    | Tool | Interaction Mode | Workflow Coverage | Languages | License |
    | --- | --- | --- | --- | --- |
    | DVC | CLI | All | Language agnostic | FLOSS (Apache 2.0) |
    | Guild AI | CLI, API | Data Preparation + Model Building | Python. Built-in framework support: TensorFlow, PyTorch, Keras, Scikit-Learn | FLOSS (Apache 2.0) |
    | Pachyderm | CLI, API | All | Language agnostic | Community Ed.: FLOSS (Apache 2.0); Enterprise Ed.: Proprietary |
    | Comet.ml | API, CLI | Data Preparation + Model Building | Python, R, Java (beta). Built-in framework support: TensorFlow, PyTorch, Keras, Scikit-Learn, SageMaker | Proprietary |
    | MLflow | API, CLI | All | Python, R, Java. Built-in framework support: Apache Spark, TensorFlow, PyTorch, Keras, Scikit-Learn, H2O | FLOSS (Apache 2.0) |
    | Neptune | API, CLI | All | Language agnostic (CLI); Python and R (API). Built-in framework support: TensorFlow, PyTorch, Keras, MLflow, SageMaker | Proprietary |
    | wandb | API, CLI | Data Preparation + Model Building | Python | Proprietary |
    | Valohai | CLI, API | All | Language agnostic | Proprietary |
    | Google Colab | Cloud IDE | Data Preparation + Model Building | Python | Proprietary |
    | FloydHub | Cloud IDE, API, CLI | All | Python. Built-in framework support: TensorFlow, PyTorch, Keras, Scikit-Learn | Proprietary |
    | Domino | Cloud IDE, API, CLI | All | Python, R, Julia. Built-in framework support: TensorFlow, PyTorch, H2O, Apache Spark, Hadoop | Proprietary |
    | Spell.run | Cloud IDE, CLI | All | Python. Built-in framework support: TensorFlow, Keras, Weights & Biases | Proprietary |
    | Polynote | Web-based IDE | Data Preparation + Model Building | Scala, Python, SQL. Built-in framework support: Apache Spark | FLOSS (Apache 2.0) |
    | DataRobot | AutoML Platform | All | Language agnostic (Python API) | Proprietary |
    | databricks | Cloud IDE, API, CLI | All | Python, R, Scala, SQL. Built-in framework support: Apache Spark, MLflow, Delta Lake, TensorFlow | Proprietary |
    | Driverless AI | AutoML Platform | All | (Python recipes) | Proprietary |
    | RapidMiner | AutoML Platform | All | (Python and R for custom code) | Proprietary |
    | dstack.ai | API | Data Preparation | Python, R | Proprietary |
    | Dotscience | Cloud IDE, API, CLI | All | Language agnostic (CLI); Python (Cloud IDE, API) | Proprietary |

    Analysis Support

    The tool sample classified according to the features of the Analysis Support category.

    | Tool | Notebook support | Data Visualization | Web Dashboard | Collaboration mode | Computational Resources |
    | --- | --- | --- | --- | --- | --- |
    | DVC | No | No | No | Async (push/pull commands) | Local |
    | Guild AI | Yes (on-premise) | No | Yes (local) | Async (push/pull commands) | Local |
    | Pachyderm | Yes (on-premise) | No | Yes (local or remote) | Async (push/pull commands) | Local + On-premise + Remote (in-house*) |
    | Comet.ml | Yes (on-premise) | No | Yes (remote) | No | Local + On-premise* + Remote* (in-house) |
    | MLflow | Yes (on-premise) | No | Yes (local) | No | Local + On-premise |
    | Neptune | Yes (on-premise) | No | Yes (remote) | Async (comments) | On-premise* + Remote (in-house) |
    | wandb | Yes (on-premise) | No | Yes (remote) | No | On-premise* + Remote (in-house) |
    | Valohai | Yes (on-premise or hosted) | No | Yes (remote) | No | On-premise* + Remote (in-house) |
    | Google Colab | Yes (hosted) | No | No | Sync (co-editing) + Async (comments) | Local + Remote (in-house or third-party) |
    | FloydHub | Yes (hosted) | No | Yes (remote) | No | On-premise* + Remote (in-house) |
    | Domino | Yes (hosted) | No | Yes (remote) | Async (reviews) | Remote (in-house*) |
    | Spell.run | Yes (hosted) | No | Yes (remote) | No | On-premise* + Remote (in-house) |
    | Polynote | Yes (on-premise) | Yes | No | No | Local |
    | DataRobot | No | Yes | Yes (remote) | No | On-premise* + Remote* (in-house or third-party) |
    | databricks | Yes (hosted) | Yes | Yes (remote) | Sync (co-editing) + Async (comments) | Remote* (third-party) |
    | Driverless AI | No | Yes | Yes (remote) | No | Remote* (in-house or third-party) |
    | RapidMiner | Yes (hosted) | Yes | Yes (remote) | No | Local + Remote* (in-house or third-party) |
    | dstack.ai | Yes (on-premise) | No | Yes (remote) | Async (comments) | On-premise* + Remote (in-house) |
    | Dotscience | Yes (hosted) | No | Yes (remote) | Async (Fork & Pull for notebooks) | On-premise* + Remote (in-house or third-party*) |

    Reproducibility Support

    The tool sample classified according to the features of the Reproducibility Support category.

    | Tool | Code Versioning | Data Access | Data Versioning | Experiment Logging | Reproducible Pipeline |
    | --- | --- | --- | --- | --- | --- |
    | DVC | Yes (external, git-based) | Local + Remote (third-party) | Yes | Yes (manual) | Yes (automatic) |
    | Guild AI | Yes (external, git-based) | Local + Remote (third-party) | Yes | Yes (hybrid) | Yes (configuration file) |
    | Pachyderm | Yes (integrated) | Local + Remote (third-party) | Yes | No | Yes |
    | Comet.ml | Yes (external, git-based) | Local + Remote (internal) | Yes | Yes (hybrid) | ? |
    | MLflow | Yes (external, git-based) | Local + Remote (third-party) | No | Yes (hybrid) | Yes (configuration file) |
    | Neptune | Yes (integrated or external, git-based) | Local + Remote (third-party) | No | Yes (hybrid) | No |
    | wandb | Yes (external, git-based) | Local + Remote (internal or third-party) | No | Yes (hybrid) | Local + Remote (third-party) |
    | Valohai | Yes (integrated or external, git-based) | Local + Remote (third-party*) | Yes | Yes (manual) | Yes (configuration file) |
    | Google Colab | Yes (file-sharing services – Google Drive) | Remote (internal or third-party) | Yes | No | No |
    | FloydHub | Yes (integrated or external, git-based) | Remote (internal or third-party) | Yes | Yes (manual) | Yes |
    | Domino | Yes (integrated) | Remote (internal or third-party) | Yes | No | Yes (automatic) |
    | Spell.run | Yes (external, git-based) | Remote (internal or third-party) | ? | Yes (hybrid) | Yes (script) |
    | Polynote | No | Local | No | No | No |
    | DataRobot | ? | Remote | ? | Yes (automatic) | Yes (built-in) |
    | databricks | Yes (integrated or external, git-based) | Remote (internal or third-party) | Yes | Yes (hybrid) | ? |
    | Driverless AI | Yes (integrated) | Remote (internal or third-party) | Yes | Yes (automatic) | Yes (built-in) |
    | RapidMiner | Yes (external, git-based) | Local + Remote (third-party) | ? | Yes (automatic) | Yes (visual or built-in) |
    | dstack.ai | No | Local + Remote (internal) | Yes | Yes (manual) | No |
    | Dotscience | Yes (integrated) | Remote (internal or third-party) | Yes | Yes (manual) | Yes (automatic) |

    * = only available in paid plans

    N.B.: Rows related to Dotscience are strike-through because the service seems to be shutting down. We read this blog post a few days after our trial.


    Repository contents

    The tools/ folder contains environment templates for the tools that require a local installation to be executed. To try the tools we used, where possible, a realistic case study inspired by the lessons of Kaggle’s micro-courses “Intro to Machine Learning” and “Intermediate Machine Learning”. The kernels/ folder contains template notebooks implementing the case study, while the sample dataset is stored in the input/ folder.

    Setup instructions

    To try one of the reviewed tools, follow these steps:

    1. go to the tool’s folder: /tools/<tool_name>;
    2. if a .env_template file exists, make a copy of it; name the copy .env; edit .env, giving a value to each of the mentioned variables;
    3. if a README.md file is present, follow the specific instructions there.
  • psql-rosetta

    Btrieve / Pervasive.SQL / ZEN : Rosetta-code example repository project

    Idea

    Provide documented example code for all database access methods supported by Pervasive.SQL on all platforms using all popular languages. Preferably useful for both beginner and advanced user as a reference guide.

    Name

    See:

    Background

    For many years it struck me that coding examples were scarce. They also varied over time (platforms, languages supported), but most of all they were stuck in time. Not very appealing for a starter, whether they are new to a programming language or to Pervasive.SQL.
    Over the years I developed ideas on how to improve this and made some efforts at writing code.
    The task ahead is quite extensive, especially if one wants to do a proper job.
    Ideas change, new projects or tasks got in between, etc. Long story short, it took some time and the result is very different from what was first anticipated, as my initial idea was to write a single reference application that could later be ported to other languages/platforms.

    Layout

    Based on the paragraph Database Access Methods in the Actian Pervasive.SQL V13 online documentation I created a Bash shell script (mk_dirs.sh), taking a single argument being the programming language name, which creates a directory structure listing all the database access methods as subdirectories. By using this script I was forced to look into and document all(?!) possibilities regardless of how odd. All subdirectories contain their own markdown ReadMe file describing the (im)possibilities and code if provided.
    All programming languages have a ReadMe markdown file in their root directory describing the ins and outs and what is and isn’t implemented, as well as a Results markdown file to register what has been tested on which platform.
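
    For illustration only, a rough Python equivalent of what mk_dirs.sh does might look like the sketch below (the real script is a Bash script, and the access-method list here is abbreviated and purely hypothetical):

    import sys
    from pathlib import Path

    # Abbreviated, illustrative list of database access methods (hypothetical)
    ACCESS_METHODS = ["btrieve", "odbc", "jdbc", "ado_net"]

    def make_dirs(language: str) -> None:
        """Create <language>/<access method>/ReadMe.md placeholders."""
        root = Path(language)
        for method in ACCESS_METHODS:
            subdir = root / method
            subdir.mkdir(parents=True, exist_ok=True)
            (subdir / "ReadMe.md").touch()

    if __name__ == "__main__":
        make_dirs(sys.argv[1])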

    Missing files versus Copyright

    The goal was not to infringe any copyrights, so headers must be copied from SDKs, which can be downloaded from the Actian website. The same goes for example code, which can be copied and pasted from the website. It would be great if example code (and headers) could be made available from a repository.
    When looking around on Github one can find copyrighted header files. I leave it to Actian to add them.

    Improvements

    I very much welcome improvements, comments and other contributions.
    Personally I can think of a few:

    • All code should conform to coding standards.
    • Refactoring/cleanup of code.
    • All code should be very rich in comments. Annotate all database calls.
    • All code should be made very defensive: if an error occurs it should be reported or at least logged.
    • All code should be properly tested, preferably on all relevant platforms, which in turn should be documented.
    • Code must be written or adapted for other platforms. Notably: Mac, IOS, Android
    • Some obvious languages/platforms are missing. Notably: Win/VS: c#, VB.net, Win/Embarcadero C++, Win/MingW or other GNU C/C++, IOS/Objective C, Android/Java, Mac/making the bash-shell scripts compatible/supportive.
    • Also some languages which used to be supported/were important do not have sample code yet. What springs to mind: Cobol, Delphi, … ? And some are no longer important: (Visual) Basic (pre .net), Pascal, Turboc (DOS), Watcom C/C++ (DOS)
    • Some ‘languages’ are not very demonstratable as they seem to require severe boilerplating, project management and/or integration in an IDE. ASP.NET being an example.
    • Integrated platforms are not listed, for example Magic. It probably makes no sense to list them. Other platforms used in the past: Clarion and Power Builder
    • Another subject which requires attention is web-based development. One can think of: Windows/ASP, Python/Flask, Python/Django, Ruby/Ruby-on-rails and Javascript, NodeJS. Optionally expanded by new kids on the block such as Dart/Flutter, Meteor, etc. although a lot of them are based on Javascript.
    • Drivers. Currently especially one springs to mind: SQLAlchemy-Pervasive : it needs some serious TLC.
    • Currently a strong focus is on database connectivity.
      Ultimately an application supporting commandline, curses (TUI), GUI while using all calls available in APIs (Btrieve, Btrieve2, ODBC, JDBC) would be a real bonus. It would cater for demoing, illustrating how calls should be used and obviously would provide a great test, especially if the code could be run using test automation.
      This would be a major thing to design and implement properly. Some baby steps in this process alone would be great.

    I am fully aware that most code does not comply with the above standards. Refactoring all code would take a lot of time, which would postpone the initial release or maybe even prevent it.
    For this reason I am releasing code which does not meet my views on proper coding.

    Credits

    See the Credits.md file. This file applies to the entire project.

    License

    See the License.md file. This file applies to the entire project.

    Warnings

    For the sake of completeness and uniformity, all access methods mentioned in the programmer’s manual are listed as options for all languages. Some combinations are quite absurd or exotic; obviously those in particular are not implemented (yet) and/or properly tested.
    All code and documentation in this repository is provided as is.
    By no means am I an expert in all languages provided. The goal is to at least deliver working code, which is a very low bar, but not an uncommon one unfortunately. Writing about programming versus software engineering could fill bookshelves; let’s not go there now.
    Hopefully the quality of the code will increase over time as people who are experts in a certain language participate and improve it.
    Most code is tested on Linux only unless stated otherwise. To improve maturity and clarity on this subject, test result tables have been added.

    Visit original content creator repository

  • insonmnia

    inSONMnia

    Build Status

    It’s an early alpha version of the platform for the SONM.io project.

    For now it has lots of unfinished tasks. The main idea is to show that such a platform can be implemented and to choose a tech stack for the future implementation.

    What is in here?

    This repository contains code for Hub, Miner and CLI.

    Where can I get it?

    A Docker container containing the CLI, Miner and Hub can be found on the public DockerHub: sonm/insonmnia

    docker pull sonm/insonmnia

    If you prefer, it’s easy to build all the components yourself. You need Go > 1.8:

    make build

    Also there is a Dockerfile to build a container:

    docker build .

    Roadmap

    Look at milestone https://github.com/sonm-io/insonmnia/milestones

    How to run

    Hub

    To start a hub you need to expose a couple of ports:
    10001 handles gRPC requests from the CLI
    10002 is used to handle communication with miners

    docker run --rm -p 10002:10002 -p 10001:10001  sonm/insonmnia sonmhub

    Miner

    To run a Miner from the container you need to pass the Docker socket inside and specify the IP of the Hub:

      docker run --net host -e DOCKER_API_VERSION=1.24 -v /run:/var/run sonm/insonmnia:alpha3 sonmminer -h <hubip:10002>

    CLI commands

    The CLI sends commands to a hub. The hub must be pointed to via --hub=hubip:port. The port is usually 10001.

    ping

    Just check that a hub is reachable and alive.

    sonmcli --hub <hubip:10001> ping
    OK

    list

    The list command shows the miners connected to a hub, along with the tasks assigned to them.

    NOTE: later each miner will have a unique signed ID instead of host:port

    sonmcli --hub <hubip:port> list
    Connected Miners
    {
    	"<minerip:port": {
    		"values": [
    			"2b845fcc-143a-400b-92c7-aac2867ab62f",
    			"412dd411-96df-442a-a397-6a2eba9147f9"
    		]
    	}
    }

    start a container

    To start a container you have to pick a hub and a miner connected to that hub.
    You can pick a miner from the output of the list command (see above).

    ./sonmcli --hub <hubip:port> --timeout=3000s  start --image schturmfogel/sonm-q3:alpha  --miner=<minerhost:port>

    The result would look like:

    ID <jobid>, Endpoint [27960/tcp-><ip:port> 27960/udp-><ip:port>]
    
    • jobid is a unique name for the task. Later it can be used to specify the task for various operations.
    • Endpoint describes the mapping of exposed ports (see Docker EXPOSE) to the real ports of a miner

    NOTE: later, STUN will be used for UDP packets, and LVS (IPVS) or a userspace proxy (like an SSH tunnel) for TCP. Miners that have a public IPv4 address or can be reached via IPv6 will not need this proxy. The proxy is intended to get through NAT.

    stop a container

    To stop a task, just provide the jobid:

    sonmcli --hub <hubip:port> stop <jobid>

    How to cook a container

    Dockerfile for the image should follow several requirements:

    • ENTRYPOINT or CMD (or both) must be present
    • Network ports should be specified via EXPOSE
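
    A minimal Dockerfile that satisfies these requirements might look like this (base image, binary and port are placeholders; 27960 matches the example task above):

    FROM alpine:3.18
    # Network ports must be declared via EXPOSE so the Hub can map them to real miner ports
    EXPOSE 27960/tcp 27960/udp
    # ENTRYPOINT and/or CMD must be present so the Miner knows what to start
    ENTRYPOINT ["/usr/bin/my-server"]
    CMD ["--port", "27960"]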

    Technical overview

    Technologies we use right now:

    • golang is the main language. Although Go has its disadvantages, we believe its model is good for a fast start and the code is easy to understand. The simplicity leads to fewer errors. It also makes it easy to contribute to the project, as the review process is very clean.
    • Docker is the heart of isolation in our platform. We rely on the security features (it’s not 100% safe!), metrics and ecosystem it provides. A nice bonus is that Docker is supported on many platforms. Docker is also working on a unikernel approach for container-based applications, which opens a huge field for security and portability improvements.
    • whisper as a discovery protocol
    • Until the epoch of IPv6 begins we need a way to get through NAT. The solution depends on the concrete transport layer. For example, different approaches should be used for UDP (e.g. STUN) and TCP (naive userspace proxy). Each approach has its own overhead, and the best-fit solution depends on the task.
    • gRPC is the API protocol between components. It’s very easy to extend, supports traffic compression and a flexible auth model, and is supported by many languages. It’s becoming more and more popular as a technology for RPC.

    Hub

    The Hub provides a public gRPC-based API. The proto files can be found in the proto dir.

    Miner

    A Miner is expected to discover a Hub using Whisper and then connect to the hub via TCP. Right now a Miner must have a public IP address. The Hub sends orders to the miner via gRPC on top of this connection and pings the miner from time to time.

    Visit original content creator repository

  • kubernetes-learning-path

    Learn Kubernetes from scratch (Beginner to Advanced level)

    🎖️ Credits: DevOpsCube

    💚 Thanks to Bibin for the beautiful article ✍️ | Original Blog/Post

    k8s

    The Kubernetes Learning Roadmap is constantly updated with new content, so you can be sure that you’re getting the latest and most up-to-date information available.

    k8s-roadmap

    Table of Contents


    Additional New Section 2024 – Table of Contents

    1. Advanced Kubernetes Networking
    2. Kubernetes Observability and Monitoring
    3. Advanced Cluster Management and Maintenance
    4. Security Best Practices
    5. Application Deployment Strategies
    6. Troubleshooting Kubernetes Clusters
    7. Additional Resources

    Kubernetes Learning Roadmap

    Learning Kubernetes can seem overwhelming. It’s a complex container orchestration system that has a steep learning curve. But with the right roadmap and an understanding of the foundational concepts, it’s something that any developer or ops person can learn.

    In this Kubernetes learning roadmap, I have added the prerequisites and a complete Kubernetes learning path covering basic to advanced Kubernetes concepts.

    Kubernetes Learning Prerequisites

    Before jumping into learning Kubernetes, you need to have a fair amount of knowledge of some of the underlying technologies and concepts.

    1. Distributed systems: Learn about distributed system basics and their use cases in modern IT infrastructure. Knowledge of the CAP theorem is good to have.

    2. Authentication & Authorization: A very basic concept in IT; however, engineers starting their careers tend to confuse the two. Get a good understanding of both, using analogies if that helps. You will quite often see these terms in Kubernetes.

    3. Key-Value Store: A type of NoSQL database. Understand just enough of the basics and its use cases.

    4. API: Kubernetes is an API-driven system, so you need to have an understanding of RESTful APIs. Also try to understand gRPC APIs; that is good-to-have knowledge.

    5. YAML: YAML stands for YAML Ain’t Markup Language. It is a data serialization language that can be used for data storage and configuration files. It’s very easy to learn, and from a Kubernetes standpoint we will use it for configuration files, so understanding YAML syntax is very important.

    6. Container: Containers are the basic building block of Kubernetes. The primary work of Kubernetes is to orchestrate containers. You need to learn the container basics and have hands-on experience with container tools like Docker or Podman. I would also suggest reading about the Open Container Initiative and the Container Runtime Interface (CRI).

    7. Service Discovery: One of the key areas of Kubernetes. You need to have basic knowledge of client-side and server-side service discovery. To put it simply, in client-side service discovery the request goes to a service registry to get the endpoints available for backend services. In server-side service discovery, the request goes to a load balancer, and the load balancer uses the service registry to get the endpoints of backend services.

    8. Networking Basics

      • CIDR Notation & Type of IP Addresses
      • L3, L4 & L7 Layers (OSI Layers)
      • SSL/TLS: One way & Mutual TLS
      • Proxy
      • DNS
      • IPTables
      • IPVS
      • Software Defined Networking (SDN)
      • Virtual Interfaces
      • Overlay networking

    prerequisites

    Learn Kubernetes Architecture

    Understanding Kubernetes architecture is not an easy task. The system has many moving parts that need to be understood in order for you to get a grip on what’s happening beneath the surface. While learning architecture, you will come across the concepts we discuss in the prerequisites.

    As Kubernetes is a complex system, trying to understand the core architecture could be a little overwhelming for DevOps Engineers. As you get more hands-on experience, you would be able to understand the core architecture better.

    Here is my suggestion. Learn the high-level architecture and key components involved in Kubernetes. If you are not able to grasp the concept, either you can spend time and do more research on a specific topic or you can learn the concept while doing hands-on. It’s your choice.

    Check out the Kubernetes Architecture guide to learn about all the Kubernetes components in detail.

    Overall you need to learn the following:

    1. Control plane components: Understand the role of each component like API server, etcd, Scheduler, and Controller manager.

    2. Worker node components: Learn about Kube Proxy, Kubelet, Container Runtime

    3. Addon Components: CoreDNS, Network plugins (Calico, weave, etc), Metric Server

    4. Cluster high availability: Most organizations use managed Kubernetes services (GKE, EKS, AKS, etc.), so the cloud provider takes care of the control plane’s high availability. However, it is very important to learn high-availability concepts for scaling the cluster across multiple zones and regions. This will help you in real-world projects and DevOps interviews.

    5. Network Design: While it is easy to set up a cluster in an open network without restrictions, it is not that easy in a corporate network. As a DevOps engineer, you should understand the Kubernetes network design and requirements so that you can collaborate with the network team better. For example, when I was working on a Kubernetes setup on Google Cloud, we used a CIDR pod range that was not routable in the corporate network. As a workaround, we had to deploy IP masquerading for the pod network.

    $1000+ Free Cloud Credits to Launch Clusters

    Deploying big clusters on the cloud can be expensive, so make use of the following cloud credits and learn to launch clusters as you would on a real project. All of these platforms offer managed Kubernetes services.

    1. GKE (Google Cloud – $300 free credits)
    2. EKS (AWS – $300 free POC credits)
    3. DO Kubernetes (Digital Ocean – $200 free credits)
    4. Linode Kubernetes Engine (Linode Cloud – $100 Free credits)
    5. Vultr Kubernetes Engine (Vultr Cloud – $250 Free Credits)

    Use one account at a time. Once the credits expire, move to the next account. Keep a watch on your credits as well as their expiry, or else you could get charged. Also, check the terms and any instance usage limits.

    Setting up servers on these platforms is very easy, and every cloud provider has extensive documentation to get you started.


    The Best Resources to Learn Kubernetes Online

    Here are some of the best online resources to learn Kubernetes practically:

    1️. The Official Kubernetes Basics Tutorial

    The official Kubernetes website offers browser-based, hands-on tutorials powered by Katacoda scenarios. It covers:

    • Kubernetes Basics
    • Configurations
    • Stateless & Stateful Application Deployment
    • Services & Networking
    • Security & Access Control


    🔹 You can also explore the official Kubernetes tasks for hands-on experience with real-world Kubernetes implementations. This will also help in preparing for Kubernetes certifications.

    2️. DevOpsCube Kubernetes Tutorials

    The DevOpsCube Kubernetes Tutorials provide 35+ hands-on guides covering:

    • Kubernetes Architecture
    • Cluster Setup & Deployments
    • Best Practices
    • Package & Secret Management
    • Monitoring & Logging

    3️. KillerCoda Interactive Tutorials

    For a fully interactive browser-based learning experience, KillerCoda offers scenario-based Kubernetes playgrounds, where you can practice commands and learn in real-time.


    Learn Kubernetes Cluster Setup & Administration

    Kubernetes Cluster Setup

    As DevOps engineers, it is very important to learn every component and cluster configuration. While there are many options to deploy a Kubernetes cluster, it is always better to learn to deploy multi-node clusters from scratch.

    With multi-node clusters, you can learn about all the concepts like Cluster security, High Availability, Scaling, Networking, etc.

    It gives you the feeling of working on a real-world project. It will also help you in interviews and you can be confident about production-level cluster configurations.

    Following are my cluster setup suggestions.

    1. Kubernetes the Hard Way: I would suggest you start with the Kubernetes the Hard Way setup. It helps you understand all the configurations involved in bootstrapping a Kubernetes cluster. If you want to work on production clusters, this lab will help you a lot. The setup is based on Google Cloud; you can use the $300 free credits to complete the lab.

    2. Kubeadm Cluster Setup: Learning kubeadm cluster setup helps you in Kubernetes certification preparation. Also, it helps you automate Kubernetes cluster setup with best practices.

    3. Minikube: If you want to have a minimal development cluster setup, minikube is the best option.

    4. Kind: Kind is another local development Kubernetes cluster setup.

    5. Vagrant Automated Kubernetes: If you prefer to have a multi-VM-based local Kubernetes cluster setup, you can try the automated vagrant setup that uses Kubeadm to bootstrap the cluster.

    Learn About Cluster Configurations

    Once you have a working cluster, you should learn about the key cluster configurations. This knowledge will be particularly helpful when working in a self-hosted Kubernetes setup.

    Even if you use a managed Kubernetes cluster for your project, there may be certain cluster configurations that you need to modify.

    For example, if you set up a cluster in a hybrid network, you may need to configure it with an on-premises private DNS server for private DNS resolution. This can be done via CoreDNS configuration.

    Also, having a solid understanding of cluster configurations will help you with Kubernetes certifications (CKA & CKS) where you need to troubleshoot cluster misconfiguration and issues.

    Understand KubeConfig File

    The kubeconfig file is a YAML file that contains all the cluster information and credentials needed to connect to a cluster.

    As a DevOps engineer, you should learn to connect to Kubernetes clusters in different ways using the kubeconfig file, because you will be responsible for setting up cluster authentication for CI/CD systems, providing cluster access to developers, etc.

    So spend some time understanding the kubeconfig file structure and its associated parameters.
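
    For example, these are the kubectl commands you will use most often when working with kubeconfig files (context and file names below are placeholders):

    # List the clusters, users and contexts defined in the active kubeconfig
    kubectl config get-contexts
    kubectl config view --minify

    # Switch clusters by switching contexts
    kubectl config use-context dev-cluster

    # Point kubectl at a specific kubeconfig file (e.g. one issued to a CI/CD system)
    kubectl --kubeconfig=/path/to/ci-kubeconfig get nodes

    # Merge several kubeconfig files for the current shell session
    export KUBECONFIG=$HOME/.kube/config:/path/to/other-kubeconfig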

    Understand Kubernetes Objects And Resources

    You will quite often come across the terms “Kubernetes Object” and “Kubernetes Resource”.

    First, you need to understand the difference between an object and a resource in Kubernetes.

    To put it simply, anything a user creates and persists in Kubernetes is an object: for example, a Namespace, Pod, Deployment, ConfigMap, Secret, etc.

    Before creating an object, you represent it in a YAML or JSON format. It is called an Object Specification (Spec). You declare the desired state of the object on the Object Spec. Once the object is created, you can retrieve its details from the Kubernetes API using Kubectl or client libraries.

    As we discussed earlier in the prerequisites section, everything in Kubernetes is an API. To create different object types, there are API endpoints provided by the Kubernetes API server. Those object-specific API endpoints are called resources. For example, the endpoint used to create a pod is called the pod resource.

    So when you create a Kubernetes object using kubectl, it converts the YAML spec to JSON format and sends it to the corresponding resource, for example the Pod resource (the Pod API endpoint).
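
    You can see this object/resource mapping directly from kubectl; the following commands are read-only and safe to run on any cluster:

    # List every resource type (API endpoint) the API server exposes
    kubectl api-resources

    # Show the schema of the Pod object spec
    kubectl explain pod.spec

    # Talk to the Pod resource endpoint directly, bypassing kubectl's object handling
    kubectl get --raw /api/v1/namespaces/default/pods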

    Learn About Pod & Associated Resources

    Once you have an understanding of Kubernetes Objects and resources, you can start with a native Kubernetes object called Pod. A pod is a basic building block of Kubernetes.

    You should learn all the Pod concepts and their associated objects like Service, Ingress, Persistent Volume, Configmap, and Secret. Once you know everything about a pod, it is very easy to learn other pod-dependent objects like deployments, Daemonset, etc.

    First, learn about the Pod Resource Definition (YAML). A typical Pod YAML contains the following high-level constructs.

    • Kind
    • Metadata
    • Annotations
    • Labels
    • Selectors
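
    A minimal Pod spec illustrating these constructs is sketched below (names, labels and image are placeholders; selectors live on objects such as Services and Deployments, which select Pods by their labels):

    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-pod                      # placeholder name
      labels:
        app: hello                         # labels are what Service/Deployment selectors match on
      annotations:
        example.com/owner: "platform-team" # hypothetical annotation key
    spec:
      containers:
        - name: web
          image: nginx:1.25                # any container image
          ports:
            - containerPort: 80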

    Once you have a basic understanding of the above, move on to hands-on learning. These concepts will make more sense when you do hands-on.

    Following are the hands-on tasks to learn about Pod and its associated objects.

    1. Deploy a pod
    2. Deploy pod on the specific worker node
    3. Add service to pod
    4. Expose the pod Service using Nodeport
    5. Expose the Pod Service using Ingress
    6. Setup Pod resources & limits
    7. Setup Pod with startup, liveness, and readiness probes.
    8. Add Persistent Volume to the pod.
    9. Attach configmap to pod
    10. Add Secret to pod
    11. multi-container pods (sidecar container pattern)
    12. Init containers
    13. Ephemeral containers
    14. Static Pods
    15. Learn to troubleshoot Pods
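
    As a quick start for the first few tasks in this list, here is a minimal imperative sketch (names and ports are placeholders):

    # 1. Deploy a pod
    kubectl run web --image=nginx:1.25

    # 3 & 4. Add a Service to the pod and expose it via NodePort
    kubectl expose pod web --port=80 --type=NodePort

    # 15. Basic troubleshooting loop
    kubectl describe pod web
    kubectl logs web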

    A few advanced pod scheduling concepts:

    1. Pod Preemption & Priority
    2. Pod Disruption Budget
    3. Pod Placement Using a Node Selector
    4. Pod Affinity and Anti-affinity
    5. Container Life Cycle Hooks

    Learn About Pod Dependent Objects

    Now that you have a better understanding of Pods and independent Kubernetes resources, you can start learning about objects that are dependent on the Pod object. While learning this, you will come across concepts like HPA (Horizontal Pod Autoscaler) and VPA (Vertical Pod Autoscaler).

    1. Replicaset
    2. Deployment
    3. Daemonsets
    4. Statefulset
    5. Jobs & Cronjobs

    Deploy End to End Application on Kubernetes

    Once you understand the basics of these objects, you can try deploying an end-to-end microservices application on Kubernetes. Start with simple use cases and gradually increase complexity.

    I would suggest you get a domain name and try setting up a microservice application from scratch and host it on your domain.

    You don’t need to develop an application for this. Choose any open-source microservice-based application and deploy it. My suggestion is the open-source Pet Clinic microservice application, which is based on Spring Boot.

    Following are the high-level tasks.

    1. Build Docker images for all the services. Ensure you optimize the Dockerfile to reduce the Docker Image size.
    2. Create manifests for all the services. (Deployment, Statefulset, Services, Configmaps, Secrets, etc)
    3. Expose the front end with service type ClusterIP
    4. Deploy Nginx Ingress controller and expose it with service type Loadbalancer
    5. Map the load balancer IP to the domain name.
    6. Create an Ingress object with a DNS name, with the front-end service name as the backend (see the sketch after this list).
    7. Validate the application.
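
    For task 6, here is a sketch of such an Ingress object, assuming the NGINX Ingress controller from step 4 and a front-end Service named frontend on port 80 (the host name is a placeholder for your own domain):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: frontend-ingress
    spec:
      ingressClassName: nginx             # matches the controller deployed in step 4
      rules:
        - host: app.example.com           # your domain name from step 5
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: frontend        # front-end Service from step 3
                    port:
                      number: 80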

    Learn About Securing Kubernetes Cluster

    Security is a key aspect of Kubernetes. There are many ways to implement security best practices in Kubernetes starting from building a secure container image.

    The following are the native ways of implementing security in Kubernetes:

    1. Service account
    2. Pod Security Context
    3. Seccomp & AppArmor
    4. Role Based Access Control (RBAC)
    5. Attribute-based access control (ABAC)
    6. Network Policies

    The following are the open-source tools you need to look at.

    1. Open Policy Agent
    2. Kyverno
    3. Kube-bench
    4. Kube-hunter
    5. Falco

    Learn About Kubernetes Operator Pattern

    Kubernetes Operators is an advanced concept.

    To understand operators, first, you need to learn the following Kubernetes concepts.

    1. Custom resource definitions
    2. Admission controllers
    3. Validating & Mutating Webhooks

    To get started with operators, you can try setting up the following operators on Kubernetes.

    1. Prometheus Operator
    2. MySQL Operator

    If you are a Go developer or you want to learn to extend/customize kubernetes, I would suggest you create your own operator using Golang.

    Learn Important Kubernetes Configurations

    While learning Kubernetes, you will probably use a cluster with open network connectivity, so most tasks get executed without any issues. However, that is not the case with clusters set up on corporate networks.

    The following are some of the custom cluster configurations you should be aware of:

    1. Custom DNS server
    2. Custom image registry
    3. Shipping logs to external logging systems
    4. Kubernetes OpenID Connect
    5. Segregating & securing Nodes for PCI & PII Workloads

    Learn Kubernetes Production Best Practices

    Following are the resources that might help and add value to the Kubernetes learning process in terms of best practices.

    1. 12 Factor Apps: It is a methodology that talks about how to code, deploy and maintain modern microservices-based applications. Since Kubernetes is a cloud-native microservices platform, it is a must-know concept for DevOps engineers. So when you work on a real-time kubernetes project, you can implement these 12-factor principles.

    2. Kubernetes Failure Stories: Kubernetes failure stories is a website that has a list of articles that talk about failures in Kubernetes implementation. If you read those stories, you can avoid those mistakes in your kubernetes implementation.

    3. Case Studies From Organizations: Spend time on use cases published by organizations on Kubernetes usage and scaling. You can learn a lot from them. Following are some of the case studies that are worth reading.

      • Scheduling 300,000 Kubernetes Pods in Production Daily
      • Scaling Kubernetes to 7,500 Nodes

    Real-World Kubernetes Case Studies

    When I spoke to the DevOps community, I found that a common issue was the lack of real-world experience with Kubernetes. If you don’t have an active Kubernetes project in your organization, you can refer to case studies and learning materials published by organizations that use Kubernetes. This will also help you in Kubernetes interviews.

    Here are some good real-world Kubernetes case studies that can enhance your Kubernetes knowledge:

    1. List of Kubernetes User Case Studies (Official Case Studies)
    2. How OpenAI Scaled Kubernetes to 7,500 Nodes (Blog)
    3. Testing 500 Pods Per Node (Blog)
    4. Dynamic Kubernetes Cluster Scaling at Airbnb (Blog)
    5. Scaling 100 to 10,000 pods on Amazon EKS (Blog)

    Kubernetes Failures/Learnings

    Kubernetes Deployment Tools (GitOps Based)

    GitOps is a technical practice that uses Git as a single source of truth for declarative infrastructure and application code.

    Some popular GitOps-based tools for deploying applications to Kubernetes clusters are:

    Additional New Section 2024 Content:-

    1. Advanced Kubernetes Networking

    Service Mesh Overview

    • Description: Service meshes help manage microservices networking by abstracting complex traffic management (routing, load balancing, retries, etc.).
    • Popular Tools: Istio, Linkerd, and Consul.
    • Use Cases: Improved resilience, observability, security, and traffic control.

    Network Policies Deep Dive

    • Description: Network policies allow admins to define allowed connections between pods and namespaces, enhancing security.
    • Example: Create policies to block or allow traffic within namespaces, useful for internal and external isolation.
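
    For example, a common starting point is a policy that allows ingress traffic only from pods in the same namespace, implicitly denying everything else (the namespace name is a placeholder):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-same-namespace
      namespace: my-namespace        # placeholder namespace
    spec:
      podSelector: {}                # applies to every pod in the namespace
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector: {}        # only pods in this namespace may connect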

    Ingress Controllers Comparison

    • Overview: Explore NGINX Ingress, Traefik, and HAProxy, discussing pros and cons and optimal use cases.
    • Setup and Examples: Walkthrough on setting up each Ingress Controller with example configurations for HTTP and HTTPS routing.

    2. Kubernetes Observability and Monitoring

    Monitoring Setup

    • Description: Setting up Prometheus and Grafana for comprehensive monitoring and visualization.
    • Key Metrics: Pod CPU/memory usage, node health, and custom metrics.
    • Setup Guide: Installation, configuration, and Grafana dashboard examples for Kubernetes clusters.

    Distributed Tracing

    • Overview: Explanation of distributed tracing and its importance in monitoring microservices.
    • Setup: Guide to integrating Jaeger or OpenTelemetry with a sample application.
    • Visualization: View and analyze request traces across services to identify bottlenecks.

    Log Aggregation

    • Introduction: Importance of centralized log management.
    • Stack Setup: Setting up an EFK (Elasticsearch, Fluentd, Kibana) stack, with tips on log storage and retention.
    • Best Practices: Log rotation, alerting, and monitoring logs for Kubernetes events.

    3. Advanced Cluster Management and Maintenance

    Automated Scaling

    • Overview: Types of scaling – Cluster Autoscaler, Horizontal Pod Autoscaler (HPA), and Vertical Pod Autoscaler (VPA).
    • Setup Guide: How to configure autoscalers, with scenarios and examples.
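
    As a minimal example, an HPA can be created imperatively against an existing Deployment (the name and thresholds are placeholders):

    # Scale the 'web' Deployment between 2 and 10 replicas, targeting 70% average CPU utilisation
    kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=70

    # Inspect the autoscaler's current state
    kubectl get hpa web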

    Backup and Disaster Recovery

    • Why It Matters: Explanation of the importance of backups, especially for etcd (Kubernetes’ key-value store).
    • Guide: Steps for backing up etcd and restoring it, with disaster recovery plan best practices.
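
    On a kubeadm-style cluster, an etcd snapshot can typically be taken as shown below; the certificate paths assume the default kubeadm layout and may differ in your setup:

    # Take a snapshot of etcd
    ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key

    # Verify the snapshot
    ETCDCTL_API=3 etcdctl snapshot status /backup/etcd-snapshot.db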

    Cluster Upgrades

    • Description: Overview of the upgrade process and planning.
    • Procedure: Step-by-step instructions for safely upgrading Kubernetes, managing node pools, and testing upgrades.

    4. Security Best Practices

    Zero-Trust Networking

    • Description: Introduction to Zero-Trust principles in Kubernetes.
    • Implementation: Use network policies and mutual TLS (mTLS) to enforce zero-trust.

    Securing Workloads with Pod Security Policies (PSP)

    • Overview: How PSPs enforce security standards on containers (e.g., limiting root access, requiring certain security contexts).
    • Examples: Sample PSPs with detailed explanations for different security levels.

    Image Security

    • Importance: Why image security is critical.
    • Tools and Setup: Integration of Trivy or Clair to automate scanning and detect vulnerabilities.
    • Example Workflow: Setting up image scanning in a CI/CD pipeline.

    Secrets Management

    • Best Practices: Using Kubernetes secrets for sensitive data and avoiding hard-coded values.
    • Vault Integration: Step-by-step guide to integrating HashiCorp Vault for secrets management.
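
    As a baseline before introducing Vault, a plain Kubernetes Secret can be created from the command line instead of being hard-coded in manifests (names and values are placeholders):

    # Create a secret from literal values (never commit these to Git)
    kubectl create secret generic db-creds \
      --from-literal=username=appuser \
      --from-literal=password='S3curePass'

    # Inspect it; note the values are only base64-encoded, not encrypted
    kubectl get secret db-creds -o yaml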

    5. Application Deployment Strategies

    Advanced GitOps

    • Overview: Advanced GitOps concepts using tools like ArgoCD or Flux for continuous deployment.
    • Examples: Real-world examples of GitOps with features like rollback, progressive delivery, and A/B testing.

    Blue-Green Deployments

    • What It Is: Introduction to Blue-Green deployments to reduce downtime.
    • Steps: Walkthrough on creating a Blue-Green deployment in Kubernetes using Services and Ingress.
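
    One simple way to flip traffic in a Blue-Green setup is to repoint the Service selector from the blue Deployment to the green one (service name and labels are placeholders):

    # Route the 'web' Service to pods labelled version=green instead of version=blue
    kubectl patch service web -p '{"spec":{"selector":{"app":"web","version":"green"}}}'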

    Canary Releases

    • Definition: Canary releases gradually introduce updates to a small subset of users.
    • Setup: Using Argo Rollouts to implement canary releases, with sample configurations.

    6. Troubleshooting Kubernetes Clusters

    Common Issues and Solutions

    • Overview: Addressing common issues like failing pods, crashed nodes, and failed deployments.
    • Solutions: Detailed steps for resolving each issue, including example scenarios and kubectl commands.

    Kubernetes Debugging Tools

    • Tools Overview: Tools like kubectl-debug, K9s, and kube-ops-view for monitoring and troubleshooting.
    • Usage: Example scenarios and tool usage for real-time issue identification.

    CrashLoopBackOff and OOMKill Handling

    • Description: Explanation of common pod errors, including CrashLoopBackOff and Out of Memory (OOM) issues.
    • Resolution Steps: How to identify, troubleshoot, and resolve these issues.
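
    A typical first-response sequence looks like this (the pod name is a placeholder):

    # See events, restart counts and the last termination reason (e.g. OOMKilled)
    kubectl describe pod my-pod

    # Read the logs of the previous (crashed) container instance
    kubectl logs my-pod --previous

    # Compare actual memory/CPU usage against the pod's limits (requires metrics-server)
    kubectl top pod my-pod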

    7. Additional Resources

    Certification Study Guides

    • Exams Covered: CKA (Certified Kubernetes Administrator), CKAD (Certified Kubernetes Application Developer), and CKS (Certified Kubernetes Security Specialist).
    • Resources: Links to official documentation, practice labs, and study guides.

    Community and News Sources

    • News and Blogs: Resources to stay updated with Kubernetes trends, like CNCF blog, Kubernetes Podcast, and KubeWeekly.
    • Community Forums: Links to Kubernetes Slack channels, Stack Overflow, and other communities for support.

    Contribute and Collaborate

    Tip

    This repository thrives on community contributions and collaboration. Here’s how you can get involved:

    • Fork the Repository: Create your own copy of the repository to work on.
    • Submit Pull Requests: Contribute your projects or improvements to existing projects by submitting pull requests.
    • Engage with Others: Participate in discussions, provide feedback on others’ projects, and collaborate to create better solutions.
    • Share Your Knowledge: If you’ve developed a new project or learned something valuable, share it with the community. Your contributions can help others in their learning journey.

    Join the Community

    Important

    We encourage you to be an active part of our community:

    • Join Our Telegram Community: Connect with fellow DevOps enthusiasts, ask questions, and share your progress in our Telegram group.
    • Follow Me on GitHub: Stay updated with new projects and content by following me on GitHub.

    Code of Conduct

    Caution

    We are committed to fostering a welcoming and respectful environment for all contributors. Please take a moment to review our Code of Conduct before participating in this community.

    Hit the Star! ⭐

    If you find this repository helpful and plan to use it for learning, please give it a star. Your support is appreciated!

    🛠️ Author & Community

    This project is crafted by Harshhaa 💡.
    I’d love to hear your feedback! Feel free to share your thoughts.

    📧 Connect with me:


    📢 Stay Connected

    Follow Me

    Visit original content creator repository