Blog

  • ledger-parse

    Description

    A very simple ledger-parser for ledger-cli journal files. It probably does not parse everything and it is probably buggy. Handle with care!

    Installation

    Simply import this module either by putting the ledgerparse.py file into the folder of your Python script, or into Python’s module path (maybe something like /usr/lib/python2.7). Then just use import ledgerparse to import the module.

    A good workflow for me is to create a symbolic link in my python2.7 lib path, so that I can import the module even when it’s not in the actual script’s path, while always using the latest version of the script. You can create such a symbolic link by entering the ledger-parse folder and running this command: sudo ln -s $PWD/ledgerparse.py /usr/lib/python2.7/ledgerparse.py.

    Usage

    Load a ledger-formatted journal file and read its content as a plain string. Pass this string to the ledgerparse.string_to_ledger() function to generate a list of ledgerparse.ledger_transaction objects. Example:

    import ledgerparse
    
    # get the content of the file into a variable
    f = open('ledger.journal')
    JOURNAL_STRING = f.read()
    f.close()
    
    # convert the string into a list with transaction-objects
    JOURNAL = ledgerparse.string_to_ledger(JOURNAL_STRING)

    Then you have all the transaction entries from the ledger journal in a list and can access them for example like this:

    # print the first transaction - this will print a string
    print JOURNAL[0]
    
    # get the payee of the first transaction
    print JOURNAL[0].payee
    
    # print the accounts with their amount of the first transaction
    for account in JOURNAL[0].accounts:
        print account

    These variables are accessible:

    # the transaction
    JOURNAL[0].date         # a datetime object
    JOURNAL[0].aux_date     # a datetime object
    JOURNAL[0].state        # string, e.g. '!' or '*'
    JOURNAL[0].code         # string
    JOURNAL[0].payee        # string
    JOURNAL[0].comments     # list of strings
    JOURNAL[0].accounts     # list of ledgerparse.ledger_account objects
    JOURNAL[0].original     # original string of the transaction
    
    # the account
    JOURNAL[0].accounts[0].name         # string
    JOURNAL[0].accounts[0].commodity    # string
    JOURNAL[0].accounts[0].amount       # ledgerparse.Money object
    JOURNAL[0].accounts[0].comments     # list of strings
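    To make the attribute list above concrete, here is a small, self-contained sketch that filters cleared transactions. The Transaction stand-in is hypothetical and only mirrors the attribute names listed above; real objects come from ledgerparse.string_to_ledger():

```python
from collections import namedtuple
from datetime import datetime

# Hypothetical stand-in mirroring the attributes documented above;
# the real objects are ledgerparse.ledger_transaction instances.
Transaction = namedtuple('Transaction', 'date state payee accounts')

journal = [
    Transaction(datetime(2016, 1, 5), '*', 'Grocery store', []),
    Transaction(datetime(2016, 1, 9), '!', 'Pending invoice', []),
]

# keep only cleared ('*') transactions
cleared = [t for t in journal if t.state == '*']
for t in cleared:
    print('%s %s' % (t.date.strftime('%Y-%m-%d'), t.payee))
```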

    Why not the native ledger module?

    Simple: the native ledger module which comes with the ledger-cli program exposes all the ledger journal transactions as postings. A posting can carry the connected transaction’s payee etc. … but it does not really parse the transactions. I wanted a parser that stays closer to the journal as you would probably write it: a list of transactions.

    IMPORTANT: This module is NOT for calculating like ledger is (at least not YET).

    Example

    If you would like to sort an unsorted ledger journal, you could use this module like this:

    import ledgerparse
    
    f = open('ledger.journal')
    JOURNAL_STRING = f.read()
    f.close()
    
    SORTED_JOURNAL_STRING = '\n\n'.join([x.get_original() for x in sorted(ledgerparse.string_to_ledger(JOURNAL_STRING), key=lambda y: y.date)])
    
    f = open('ledger_sorted.journal', 'w')
    f.write(SORTED_JOURNAL_STRING)
    f.close()
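    The one-liner above does three things at once: parse the journal, sort the transactions by date, and re-join their original text with blank lines. Spelled out step by step on hypothetical stand-in objects (the real ones expose .date and .get_original(), per the attribute list above):

```python
from collections import namedtuple
from datetime import datetime

# Hypothetical stand-ins; real objects come from
# ledgerparse.string_to_ledger() and use .get_original().
Tx = namedtuple('Tx', 'date original')

transactions = [
    Tx(datetime(2016, 3, 1), '2016-03-01 Rent\n    expenses:rent   500 EUR'),
    Tx(datetime(2016, 1, 5), '2016-01-05 Food\n    expenses:food    20 EUR'),
]

# 1. sort by date
transactions_sorted = sorted(transactions, key=lambda t: t.date)

# 2. re-join the original texts, separated by blank lines
sorted_journal = '\n\n'.join(t.original for t in transactions_sorted)

print(sorted_journal.splitlines()[0])
```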

    Visit original content creator repository
    https://github.com/Tagirijus/ledger-parse

  • script-export-wp-posts

    WordPress Export Script

    A powerful shell script that exports WordPress posts, custom permalinks, and users from WordPress sites – either locally or remotely via SSH. Generates both CSV and Excel files for SEO audits and data analysis.

    Key Features

    • ✅ Proper CSV Handling: Correctly handles posts with commas, quotes, and special characters in titles
    • 🔄 Unified Operation: Single script for both local and remote exports
    • 📊 Excel Generation: Automatic conversion with clickable URLs and admin links
    • 🔍 Dynamic Discovery: Automatically finds all public post types
    • 🌐 Multi-Host Support: Works with Pressable, WP Engine, Kinsta, and more

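    The export script itself is shell, but the CSV quoting rules it has to honor are easy to see in a few lines of Python: a field containing a comma or a quote must be wrapped in double quotes, with inner quotes doubled. This is a sketch of the CSV convention, not the script’s actual code:

```python
import csv
import io

rows = [
    ['ID', 'post_title', 'post_name'],
    ['42', 'Hello, World "Again"', 'hello-world-again'],  # comma and quotes in the title
]

buf = io.StringIO()
csv.writer(buf).writerows(rows)
print(buf.getvalue())
# the tricky field is emitted as: "Hello, World ""Again"""
```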

    Features

    • Unified Script: Single script handles both local and remote (SSH) exports
    • Post Export: Exports all public post types (posts, pages, custom post types)
    • Custom Permalinks: Captures custom permalink structures if set
    • User Export: Optional export of users with their post counts
    • Excel Generation: Automatically converts CSV to Excel with formulas for clickable URLs
    • SEO-Ready: Includes all necessary data for SEO audits and migration planning
    • Smart SSH: Auto-detects SSH hosts from config and suggests appropriate paths
    • Domain-Named Folders: Export folders include the domain name for easy identification
    • Host Detection: Recognizes common hosts (Pressable, WP Engine, Kinsta) and adapts accordingly

    Usage

    Local Export (when you’re in the WordPress directory)

    ./export_wp_posts.sh

    Remote Export (via SSH)

    ./export_wp_posts.sh --remote
    # or
    ./export_wp_posts.sh -r

    Setup Excel Support

    ./enable_excel.sh

    The script will:

    1. Auto-detect available SSH hosts from your ~/.ssh/config (remote mode)
    2. Suggest appropriate WordPress paths based on the host type
    3. Prompt for domain name and user export preference
    4. Generate all files locally in a timestamped, domain-named folder

    Output

    All files are created in a timestamped folder that includes the domain name:

    export_wp_posts_20250811_143244_example-com/
    ├── export_all_posts.csv              # Raw post export
    ├── export_custom_permalinks.csv      # Custom permalink data
    ├── export_wp_posts_[timestamp].csv   # Final merged CSV
    ├── export_wp_posts_[timestamp].xlsx  # Excel file with formulas
    ├── export_users.csv                  # Raw user export (if enabled)
    ├── export_users_with_post_counts.csv # Users with post counts (if enabled)
    └── export_debug_log.txt              # Debug information (if DEBUG=1)
    

    Excel File Structure

    The generated Excel file includes:

    • Row 1: Editable base domain (change this to update all URLs)
    • Row 2: Column headers
    • Column A: Formula-generated full URLs (uses custom permalink if exists, otherwise post_name)
    • Column I: Clickable WP Admin edit links
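    One way to picture the formula column: concatenate the editable base domain in row 1 with either the custom permalink or the post slug. The cell references below (B1 for the domain, C/D for slug and permalink) are assumptions for illustration, not the script’s actual layout:

```python
def url_formula(row):
    # Hypothetical Excel formula builder: use the custom permalink
    # (assumed column D) if set, otherwise post_name (assumed column C),
    # prefixed with the base domain assumed to live in cell B1.
    return '=HYPERLINK($B$1 & IF(D%d<>"", D%d, C%d))' % (row, row, row)

print(url_formula(3))
```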

    Requirements

    • Bash: Compatible shell environment
    • For Local Mode:
      • WP-CLI installed and accessible
      • Run from WordPress root directory
    • For Remote Mode:
      • SSH access to the target server
      • WP-CLI installed on the remote server
    • For Excel Generation:
      • Python 3
      • openpyxl package (installed via enable_excel.sh)

    Installing Excel Support

    Run the included setup script:

    ./enable_excel.sh

    This will install openpyxl in your user directory without affecting system Python.

    Supported Hosts (Remote Mode)

    The script recognizes and adapts to:

    • Pressable: Auto-suggests /htdocs path
    • WP Engine: Auto-detects site path from hostname
    • Kinsta: Suggests standard Kinsta paths
    • SiteGround: Suggests ~/public_html
    • Generic hosts: Default to ~/public_html

    Exported Data

    Posts Export (7 columns)

    1. ID: Post ID
    2. post_title: Title (sanitized, commas removed)
    3. post_name: URL slug
    4. custom_permalink: Custom permalink if set
    5. post_date: Publication date
    6. post_status: Status (publish, draft, etc.)
    7. post_type: Type (post, page, custom types)

    Users Export (8 columns if enabled)

    1. ID: User ID
    2. user_login: Username
    3. user_email: Email address
    4. first_name: First name
    5. last_name: Last name
    6. display_name: Display name
    7. roles: User roles
    8. post_count: Number of posts authored (N/A for remote exports)

    Troubleshooting

    SSH Connection Issues

    • For Pressable hosts, the script automatically uses the -T flag to disable pseudo-terminal allocation
    • Connection keepalive is enabled with 5-second intervals
    • Large exports may cause connections to close – this is normal and handled gracefully

    Excel Generation

    • If Excel generation fails, ensure Python and openpyxl are installed
    • Run ./enable_excel.sh to set up Excel support
    • CSV files can always be imported into Excel/Google Sheets manually

    Debug Mode

    To enable detailed logging, edit the script and set:

    DEBUG=1

    Examples

    Local WordPress Export

    cd /var/www/mysite
    ./export_wp_posts.sh
    # Enter domain: mysite.com
    # Include users? y

    Remote Pressable Export

    ./export_wp_posts.sh --remote
    # Select host: 1 (pressable-site)
    # Confirm path: /htdocs
    # Enter domain: client-site.com
    # Include users? n

    Legacy Script

    The original local-only script is preserved as export_wp_posts_legacy.sh for reference.

    Contributing

    Feel free to submit issues and enhancement requests!

    License

    MIT License – see LICENSE file for details

    Author

    Eric Rasch
    GitHub: https://github.com/ericrasch/script-export-wp-posts

    Future Enhancements (TODO)

    1. Configuration File Support

    • Save settings the first time a user runs the script
    • Incrementally add domains to the configuration file as they’re used
    • Present previously used domains as options when script is rerun
    • Save SSH favorites from recently used connections
    • Include last exported date for each domain

    2. Export Profiles/Templates

    • Add ability to save and reuse export configurations (e.g., --profile seo-audit)
    • Different profiles for different use cases (migration, audit, backup)

    3. Incremental/Delta Exports

    • Export only posts modified since last export
    • Options like --since "2025-08-01" or --since-last-export

    4. Additional Export Formats

    • JSON export for programmatic processing
    • SQL export for direct database dumps
    • Markdown export for documentation

    5. Enhanced Error Recovery

    • Automatic retry on SSH connection failures
    • Resume capability for interrupted exports
    • Better timeout handling for large sites

    6. Export Validation & Reports

    • Check for broken internal links
    • Identify missing featured images
    • Find duplicate slugs/permalinks
    • Generate summary reports with potential issues

    7. Bulk Operations Support

    • Export from multiple sites in one run
    • Support for batch configuration files

    8. Custom Field Support

    • Export specific custom fields/meta data
    • Option like --meta-keys "seo_title,seo_description"

    9. Performance Enhancements

    • Parallel processing for large exports
    • Compression of export files
    • Option to exclude post content for faster exports
    • Option to exclude specific post-types

    10. Integration Features

    • Webhook notifications on completion
    • Direct upload to Google Drive/Dropbox
    • Email export results

    11. Data Transformation Options

    • Convert relative URLs to absolute
    • Strip HTML from titles/content
    • Normalize date formats

    12. Security Enhancements

    • Encrypted exports for sensitive data
    • Audit log of exports
    • Option to redact personal data
    Visit original content creator repository https://github.com/ericrasch/script-export-wp-posts
  • paytm-clone

    Turborepo starter

    This is an official starter Turborepo.

    Using this example

    Run the following command:

    npx create-turbo@latest

    What’s inside?

    This Turborepo includes the following packages/apps:

    Apps and Packages

    • docs: a Next.js app
    • web: another Next.js app
    • @repo/ui: a stub React component library shared by both web and docs applications
    • @repo/eslint-config: eslint configurations (includes eslint-config-next and eslint-config-prettier)
    • @repo/typescript-config: tsconfig.jsons used throughout the monorepo

    Each package/app is 100% TypeScript.

    Utilities

    This Turborepo has some additional tools already set up for you:

    Build

    To build all apps and packages, run the following command:

    cd my-turborepo
    pnpm build
    

    Develop

    To develop all apps and packages, run the following command:

    cd my-turborepo
    pnpm dev
    

    Remote Caching

    Turborepo can use a technique known as Remote Caching to share cache artifacts across machines, enabling you to share build caches with your team and CI/CD pipelines.

    By default, Turborepo will cache locally. To enable Remote Caching you will need an account with Vercel. If you don’t have an account you can create one, then enter the following commands:

    cd my-turborepo
    npx turbo login
    

    This will authenticate the Turborepo CLI with your Vercel account.

    Next, you can link your Turborepo to your Remote Cache by running the following command from the root of your Turborepo:

    npx turbo link
    

    Useful Links

    Learn more about the power of Turborepo:

    Visit original content creator repository
    https://github.com/pranaydwivedi444/paytm-clone

  • efrt

    compression of key-value data
    npm install efrt

    if your data looks like this:

    var data = {
      bedfordshire: 'England',
      aberdeenshire: 'Scotland',
      buckinghamshire: 'England',
      argyllshire: 'Scotland',
      bambridgeshire: 'England',
      cheshire: 'England',
      ayrshire: 'Scotland',
      banffshire: 'Scotland'
    }

    you can compress it like this:

    import { pack } from 'efrt'
    var str = pack(data)
    //'England:b0che1;ambridge0edford0uckingham0;shire|Scotland:a0banff1;berdeen0rgyll0yr0;shire'

    then (very!) quickly flip it back into:

    import { unpack } from 'efrt'
    var obj = unpack(str)
    obj['bedfordshire'] //'England'

    Yep,

    efrt packs category-type data into a very compressed prefix trie format, so that redundancies in the data are shared, and nothing is repeated.

    By doing this clever-stuff ahead-of-time, efrt lets you ship much more data to the client-side, without hassle or overhead.

    The whole library is 8kb, the unpack half is barely 2kb.

    it is based on:

    Benchmarks!

    Basically,
    • get a js object into very compact form
    • reduce filesize/bandwidth a bunch
    • ensure the unpacking time is negligible
    • keep word-lookups on critical-path
    import { pack, unpack } from 'efrt' // const {pack, unpack} = require('efrt')
    
    var foods = {
      strawberry: 'fruit',
      blueberry: 'fruit',
      blackberry: 'fruit',
      tomato: ['fruit', 'vegetable'],
      cucumber: 'vegetable',
      pepper: 'vegetable'
    }
    var str = pack(foods)
    //'{"fruit":"bl0straw1tomato;ack0ue0;berry","vegetable":"cucumb0pepp0tomato;er"}'
    
    var obj = unpack(str)
    console.log(obj.tomato)
    //['fruit', 'vegetable']

    or, an Array:

    if you pass it an array of strings, it just creates an object with true values:

    const data = [
      'january',
      'february',
      'april',
      'june',
      'july',
      'august',
      'september',
      'october',
      'november',
      'december'
    ]
    const packd = pack(data)
    // true¦a6dec4febr3j1ma0nov4octo5sept4;rch,y;an1u0;ly,ne;uary;em0;ber;pril,ugust
    const sameArray = Object.keys(unpack(packd))
    // same thing !

    Reserved characters

    the keys of the object are normalized. Spaces/unicode are good, but numbers, case-sensitivity, and some punctuation (semicolon, comma, exclamation-mark) are not (yet) supported.

    specialChars = new RegExp('[0-9A-Z,;!:|¦]')

    efrt is built-for, and used heavily in compromise, to expand the amount of data it can ship onto the client-side. If you find another use for efrt, please drop us a line🎈

    Performance

    efrt is tuned to be very quick to unzip. It is O(1) to lookup. Packing-up the data is the slowest part, which is usually fine:

    var compressed = pack(skateboarders) //1k words (on a macbook)
    var trie = unpack(compressed)
    // unpacking-step: 5.1ms
    
    trie.hasOwnProperty('tony hawk')
    // cached-lookup: 0.02ms

    Size

    efrt will pack filesize down as much as possible, depending upon the redundancy of the prefixes/suffixes in the words, and the size of the list.

    • list of countries – 1.5k -> 0.8k (46% compressed)
    • all adverbs in wordnet – 58k -> 24k (58% compressed)
    • all adjectives in wordnet – 265k -> 99k (62% compressed)
    • all nouns in wordnet – 1,775k -> 692k (61% compressed)

    but there are some things to consider:

    • bigger files compress further (see 🎈 birthday problem)
    • using efrt will reduce gains from gzip compression, which most webservers quietly use
    • english is more suffix-redundant than prefix-redundant, so non-english words may benefit from other styles

    Assuming your data has a low category-to-data ratio, you will hit break-even at about 250 keys. If your data is in the thousands of keys, you can be very confident about saving your users some considerable bandwidth.

    Use

    IE9+

    <script src="https://unpkg.com/efrt@latest/builds/efrt.min.cjs"></script>
    <script>
      var smaller = efrt.pack(['larry', 'curly', 'moe'])
      var trie = efrt.unpack(smaller)
      console.log(trie['moe'])
    </script>

    if you’re doing the second step in the client, you can load just the CJS unpack half of the library (~3k):

    const unpack = require('efrt/unpack') // node/cjs
    <script src="https://unpkg.com/efrt@latest/builds/efrt-unpack.min.cjs"></script>
    <script>
      var trie = unpack(compressedStuff)
      trie.hasOwnProperty('miles davis')
    </script>

    Thanks to John Resig for his fun trie-compression post on his blog, and Wiktor Jakubczyc for his performance analysis work

    MIT

    Visit original content creator repository https://github.com/spencermountain/efrt
  • Exploring-World-Population-With-R

    Exploring World Population With R

    In this project, I explored and analyzed a world population dataset in the R programming language. Utilizing tidyverse, I created two choropleth maps. One map shows the growth rate by continent, and the other shows the growth rate of African countries.

    Growth Rate By Continent | Choropleth

    Context

    The world’s population has undergone remarkable growth, exceeding 7.5 billion by mid-2019 and continuing to surge beyond previous estimates. Notably, China and India stand as the two most populous countries, with China’s population potentially facing a decline while India’s trajectory hints at surpassing it by 2030. This significant demographic shift is just one facet of a global landscape where countries like the United States, Indonesia, Brazil, Nigeria, and others, each with populations surpassing 100 million, play pivotal roles.

    The steady decrease in growth rates, though, is reshaping projections. While the world’s population is expected to exceed 8 billion by 2030, growth will notably decelerate compared to previous decades. Specific countries like India, Nigeria, and several African nations will notably contribute to this growth, potentially doubling their populations before rates plateau.

    Can you tell which countries in Africa have the highest growth?

    Growth Rate in Africa | Choropleth

    Content

    This dataset provides comprehensive historical population data for countries and territories globally, offering insights into various parameters such as area size, continent, population growth rates, rankings, and world population percentages. Spanning from 1970 to 2023, it includes population figures for different years, enabling a detailed examination of demographic trends and changes over time.

    Dataset

    Structured with meticulous detail, this dataset offers a wide array of information in a format conducive to analysis and exploration. Featuring parameters like population by year, country rankings, geographical details, and growth rates, it serves as a valuable resource for researchers, policymakers, and analysts. Additionally, the inclusion of growth rates and world population percentages provides a nuanced understanding of how countries contribute to global demographic shifts.

    This dataset is invaluable for those interested in understanding historical population trends, predicting future demographic patterns, and conducting in-depth analyses to inform policies across various sectors such as economics, urban planning, public health, and more.

    Structure

    This dataset (world_population_data.csv) covering from 1970 up to 2023 includes the following columns:

    Column Description
    Rank Rank by Population
    CCA3 3 Digit Country/Territories Code
    Country Name of the Country
    Continent Name of the Continent
    population_2023 Population of the Country in the year 2023
    population_2022 Population of the Country in the year 2022
    population_2020 Population of the Country in the year 2020
    population_2015 Population of the Country in the year 2015
    population_2010 Population of the Country in the year 2010
    population_2000 Population of the Country in the year 2000
    population_1990 Population of the Country in the year 1990
    population_1980 Population of the Country in the year 1980
    population_1970 Population of the Country in the year 1970
    area_kms_squared Area size of the Country/Territories in square kilometer
    density_kms_squared Population Density per square kilometer
    growth_rate Population Growth Rate by Country
    world_percentage Population percentage by each Country

    Acknowledgements

    Thanks to @sazidthe1 who provided the dataset to Kaggle.

    Visit original content creator repository https://github.com/texasbe2trill/Exploring-World-Population-With-R
  • remix-mythx-plugin

    Remix MythX plugin

    github pages

    Performs Static and Dynamic Security Analysis using the MythX Cloud Service.

    Install plugin

    In order to start using the plugin you need to activate it in plugin manager.

    Plugin activation

    The plugin depends on the Solidity Compiler plugin, which you need to activate as well.

    Credentials

    Plugin settings

    You need to open the plugin and click the ‘MythX Settings’ button at the bottom of the plugin. There is a block with credentials at the top of the plugin’s Settings page. These will be used to execute security analysis via MythX. Trial credentials are provided by default. You can use them to analyze your contracts, but the report will be restricted.

    You can create own account on mythx.io

    Execution

    1. Select smart contract in a File explorer
    2. Compile the contract in Solidity compiler plugin
    3. Open MythX plugin
    4. Select needed contract
    5. Click Analyze and wait for the response

    Report

    Plugin report

    When the report is received you will see a list of issues. Click on an issue to highlight its location in the code.

    Troubleshooting

    1. If you run the plugin locally on Chrome, you may face a white screen issue. The issue happens when a plugin uses more than 10% of the allocated resources for a page. This is how the browser detects and prevents malicious behavior of non-origin content, which is rendered in an iframe on the page. The browser stops rendering the content and waits until the sub-frame process stops using so many resources.

      Solutions:

      1. Make sure that you build the plugin for the production environment

      2. Make sure that your React Chrome extension is disabled

    2. Brave browser error:

      Failed to read the 'localStorage' property from 'Window': Access is denied for this document.

      Solution:

      Set Cookies setting to Allow all cookies on chrome://settings/shields page

    Deployment

    Install

    When you want to update

    • node tools/ipfs-upload/bin/upload-remix-plugin <path-to-react-build-folder> : Upload to IPFS (Copy the Hash provided)
    • Update version & url of your profile under /plugins/mythx/profile.json
    • Push (this should trigger a GitHub Action that takes the new value and updates the build/profile.json)
    • Create a Pull Request and we will approve it.
    Visit original content creator repository https://github.com/aquiladev/remix-mythx-plugin
  • linode-api3

    PHP Latest Stable Version Build Status Code Coverage Scrutinizer Code Quality

    Linode API v3 Client Library for PHP

    This library implements the full spec of Linode API v3 (in accordance with https://www.linode.com/api/utility/api.spec), including functions which are not described at the Linode’s site yet (the documentation seems to be slightly outdated at the moment).

    The library wasn’t written by hand, but autogenerated from the spec. This approach provides several advantages:

    • we can be sure that nothing from the spec is missed,
    • no implementation errors which could be caused by human factor,
    • in case of the spec extension it’s fast and easy to update the library’s code.

    Also please note that “test.echo” is skipped from the implementation.

    Warning

    The library addresses Linode’s legacy API. For most recent API please refer to this library.

    Requirements

    PHP 5.6 or later is required.

    Installation

    The recommended way to install is via Composer:

    composer require "webinarium/linode-api3"

    Usage Example

    Below is a complete example of how to create a new Linode host using the library:

    use Linode\LinodeApi;
    use Linode\LinodeException;
    use Linode\PaymentTerm;
    
    // Your API key from the Linode profile.
    $key = '...';
    
    // Hardcode some IDs to make the example shorter.
    // Normally you might want to use "Linode\AvailApi" class functions.
    $datacenter = 3;    // Fremont datacenter
    $plan       = 1;    // we will use the cheapest plan
    
    // Create new linode.
    try {
        $api = new LinodeApi($key);
        $res = $api->create($datacenter, $plan, PaymentTerm::ONE_MONTH);
    
        printf("Linode #%d has been created.\n", $res['LinodeID']);
    }
    catch (LinodeException $e) {
        printf("Error #%d: %s.\n", $e->getCode(), $e->getMessage());
    }

    Batching Requests

    The Linode API also supports a batched mode, whereby you supply multiple request sets and receive back an array of responses. Example batch request using the library:

    use Linode\Batch;
    use Linode\LinodeApi;
    use Linode\PaymentTerm;
    
    // Your API key from the Linode profile.
    $key = '...';
    
    // Hardcode some IDs to make the example shorter.
    // Normally you might want to use "Linode\AvailApi" class functions.
    $datacenters = [2, 3, 4, 6];    // all four US datacenters
    $plan        = 1;               // we will use the cheapest plan
    
    // Create a batch.
    $batch = new Batch($key);
    
    // Create new linode on each of US datacenters.
    $api = new LinodeApi($batch);
    
    foreach ($datacenters as $datacenter) {
        $api->create($datacenter, $plan, PaymentTerm::ONE_MONTH);
    }
    
    // Execute batch.
    $results = $batch->execute();
    
    foreach ($results as $res) {
        printf("Linode #%d has been created.\n", $res['DATA']['LinodeID']);
    }

    Tests

    Almost all tests are mocked so you don’t have to use a real API key and no real linodes are affected. The only tests which make a complete API call are TestTest (for “test.echo“) and ApiTest (for “api.spec“):

    ./bin/phpunit --coverage-text

    Library regeneration

    If you would like to regenerate the library code, you can do it with two simple steps:

    php ./generator/generator
    php ./bin/php-cs-fixer fix
    Visit original content creator repository https://github.com/webinarium/linode-api3
  • awesome-material-components

    Awesome Material Components Awesome Tweet

    A curated list of awesome projects related to Google’s Material Components.

    Awesome Material Components is a collection of resources related to the official Google’s Material Components library, as opposed to the community-based implementations of Material Design featured in another list.

    The purpose of this list is to increase the adoption of Material Components by sharing the knowledge about its community. So, if you have an interesting MDC-based project or tutorial, feel free to contribute.

    Please don’t forget to star this repo and share it among your friends! Thank you!

    Contents

    Material Components Web (MDC Web)

    MDC Web Resources

    MDC Web Framework Integrations

    Projects Using MDC Web

    Material Components Android (MDC Android)

    MDC Android Resources

    Material Components iOS (MDC iOS)

    MDC iOS Resources

    Contribute

    Contributions welcome! Please read the contribution guidelines first.

    License

    CC-BY-SA-4.0

    Visit original content creator repository https://github.com/webdenim/awesome-material-components
  • MOON-BNB-Reservations-Microservices-API-NestJS-MongoDB-Mongoose-Passport-Joi

    MOON BNB

    Work in progress. Please check back soon.

    Tired of the same old Earth vacations? Ready for a celestial adventure that’s out of this world? Look no further than MoonBnB, the first and only vacation rental service on the Moon!

    🚀 Space-Age Stays

    Choose from a variety of MoonBnB listings, from cozy lunar cabins to futuristic space domes. Our accommodations are equipped with state-of-the-art life support systems, anti-gravity beds for a good night’s sleep, and windows designed for optimal stargazing. Feel the tranquility of the cosmos as you unwind in your private moon base.

    🌙 Lunar Luxuries Await You

    Step into a new era of space tourism with our lunar accommodations. Whether you’re a seasoned astronaut or a first-time space traveler, MoonBnB has the perfect lodging for you. Enjoy breathtaking views of Earthrise, take a leisurely stroll in the one-sixth gravity, and witness the moonlit landscapes that will leave you awestruck.

    πŸ›°οΈ Local Experiences

    Immerse yourself in the lunar lifestyle with our curated local experiences. Take a guided moonwalk tour, enjoy a low-gravity spa day, or savor the flavors of space cuisine at our cosmic cafes. MoonBnB hosts are dedicated to making your stay unforgettable, providing insider tips on the best lunar spots to explore.

    🪶 Zero-Gravity Adventures

    Looking for a thrill? MoonBnB offers a range of lunar activities for the adventurous spirit. From zero-gravity bungee jumping to moon buggy races, there’s no shortage of excitement. Capture your unforgettable moments with Earth in the background for envy-inducing social media posts.

    🔒 Secure and Seamless Booking

    Rest easy knowing that your MoonBnB stay is secure. Our booking process is as smooth as a rocket launch, and our lunar customer support team is always ready to assist. We’ve got the logistics covered so you can focus on enjoying your lunar vacation.

    🌌 MoonBnB – Beyond Your Wildest Orbits

    MoonBnB isn’t just a vacation; it’s a celestial escape that’s truly out of this world. Book your lunar adventure today and make memories that are, quite literally, astronomical! 🌌🌠

    Visit original content creator repository
    https://github.com/pjborowiecki/MOON-BNB-Reservations-Microservices-API-NestJS-MongoDB-Mongoose-Passport-Joi