Arthur HALET
27 years old - driving licence
Phone: 06-79-21-39-53
Email: arthurh.halet@gmail.com

Personal projects



  • basicauth-transport: A simple transport that wraps another transport to add basic auth to requests
  • buildpack-always-fail: A buildpack to show what's wrong in Cloud Foundry's DEA
  • cf-chocolatey: A Chocolatey package to install the Cloud Foundry CLI
  • ciwatch: A tool to see the status of all your CI tools at a glance (Travis, Scrutinizer, SensioLabsInsight, CodeClimate)

basicauth-transport

A simple transport that wraps another transport to add basic auth to requests.

Usage

package main

import (
    "net/http"

    transport "github.com/ArthurHlt/basicauth-transport" // aliased to the package name used below
)

func main() {

    http.DefaultClient.Transport = transport.NewDefaultBasicAuthTransport("username", "password")

    // with a custom transport
    http.DefaultClient.Transport = transport.NewBasicAuthTransport(
        "username",
        "password",
        &http.Transport{})
}
See the readme


cf-chocolatey

A Chocolatey package to install the official command-line client for Cloud Foundry.

Installation

  1. Install Chocolatey (follow the instructions at https://chocolatey.org/)
  2. Run choco install cf in your preferred CLI
  3. You're done.

Rebuild

To keep this Chocolatey package up to date with https://github.com/cloudfoundry/cli, I wrote a PHP script that can be run every day.

To try it, run php rebuild.php from the command line.

It will:

  1. Check whether the version in this repository and the one in https://github.com/cloudfoundry/cli are the same
  2. Recreate tools\chocolateyinstall.ps1 and cf.nuspec with the new version.
  3. Repack the NuGet package with the cpack command.
  4. Push the new package to https://chocolatey.org.
See the readme

CIWatch

A useful tool to see the status of all your CI tools at a glance (tools like Travis, Scrutinizer, SensioLabsInsight or CodeClimate).

It also provides tools to create a complete CI environment in one step, and lets you restart an inspection on a repo.




  • deaph: A PHP deployer
  • dialog-watson-client: Client for the Watson Dialog module
  • dockerfiles: Repo of Dockerfiles
  • echo-colors: Echo in CLI with colors easily by using colorstring

What is deaph

Deaph is a very flexible deployer which you can use to deploy whatever you want over FTP, SFTP, Dropbox, Zip, the local filesystem or Amazon S3. After deploying your files you can run steps in much the same way as Puppet or Chef.

Installation

Deaph is a .phar file; you can download it directly.

How to use

See the readme

dialog-watson-client

Client for the Watson Dialog module

Requirements

  • Python 2.7
  • Pip

Installation

Install with pip: pip install dialog-watson-client

Run the playground

Simply run from the command line: dialog-watson-client --name=dialog-name path/to/dialog/file [--clean] (the optional --clean flag removes your dialogs in Watson) and you can chat with your robot.

On first launch it will create a config file at ~/.config-dialog-watson.yml and ask for your Watson credentials.

Usage for developers

Bootstrap example:

from dialog_watson_client.Client import Client

# The library abstracts registering the dialog (and updating it when you
# change it); it stores your dialog id in a file called `dialog_id_file.txt`.
watsonClient = Client('user_watson', 'password_watson', 'file/path/to/dialog', 'your_dialog_name')
watsonClient.start_dialog()  # creates or updates the dialog in Watson and starts the conversation

resp = watsonClient.converse('hi')  # talk to the robot: here it says 'hi' and Watson answers
print resp.response  # show the response from Watson
watsonClient.get_profile().get_data()  # extracted data from Watson in the format: [key => value]

Note: if your file is in XML (and you have the lxml library installed), it will also validate the format against the XSD: https://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/doc/dialog/download/WatsonDialogDocument_1.0.xsd

See the readme

dockerfiles

Repo of dockerfiles

See the readme

echo-colors

Echo in CLI with colors easily by using colorstring.

Installation

On *nix systems

You can install this via the command-line with either curl or wget.

via curl

$ sh -c "$(curl -fsSL https://raw.github.com/ArthurHlt/echo-colors/master/bin/install.sh)"

via wget

$ sh -c "$(wget https://raw.github.com/ArthurHlt/echo-colors/master/bin/install.sh -O -)"

On windows

You can install it by downloading the .exe matching your CPU from the releases page: https://github.com/ArthurHlt/echo-colors/releases. Alternatively, if you have a shell-interpreting terminal, you can use the command-line scripts above; they will download the file into your current working directory.

From go command line

Simply run in terminal:

$ go get github.com/ArthurHlt/echo-colors

Usage

NAME:
   echoc - Echo in colors some text easily

USAGE:
   echoc "[red] this is red [yellow]and yellow [_red_]with red background [bold] and bold [reset] and no colors"

VERSION:
   1.0.0

COMMANDS:
GLOBAL OPTIONS:
   -n           Optional. Do not print the trailing newline character
   --help, -h       show help
   --version, -v    print the version


  • ep_cloudfoundry: Etherpad Lite plugin for Cloud Foundry support
  • generate-sql-data: Generate SQL data for a given size
  • annotations
  • arhframe: The arhframe PHP framework

ep_cloudfoundry

See the readme

generate-sql-data

Generate SQL data for a given size.

Installation

On *nix systems

You can install this via the command-line with either curl or wget.

via curl

$ sh -c "$(curl -fsSL https://raw.github.com/ArthurHlt/generate-sql-data/master/bin/install.sh)"

via wget

$ sh -c "$(wget https://raw.github.com/ArthurHlt/generate-sql-data/master/bin/install.sh -O -)"

On windows

You can install it by downloading the .exe matching your CPU from the releases page: https://github.com/ArthurHlt/generate-sql-data/releases. Alternatively, if you have a shell-interpreting terminal, you can use the command-line scripts above; they will download the file into your current working directory.

From go command line

Simply run in terminal:

$ go get github.com/ArthurHlt/generate-sql-data

Usage

Usage: generate-sql-data [file size] [file name] (e.g.: generate-sql-data 1mb fakedata.sql)
See the readme


  • iocart: IoC container in the Spring style
  • util: Util libraries for arhframe
  • yamlarh: YAML injector for arhframe, usable standalone

IocArt

IocArt is another IoC (Inversion of Control) container, close to the Spring IoC style. The main point is that IocArt keeps its context file in YML. A bean is a "class" into whose properties you can inject:

  • Another bean
  • A property file
  • A YAML file read by yamlarh
  • A stream

You can also import other YAML contexts into a YAML context.

Installation

Through Composer, obviously:

{
    "require": {
        "arhframe/iocart": "1.*"
    }
}

Usage

use Arhframe\IocArt\BeanLoader;

$beanLoader = BeanLoader::getInstance();
$beanLoader->loadContext('your/yaml/file/for/context');

Examples

See the readme

util

Util libraries for arhframe

See the readme

Yamlarh

YAML injector for arhframe, usable standalone. You can inject into your YAML:

  • An object
  • A constant from scope
  • A variable from the global scope
  • A variable from a YAML file

You can also import other YAML files inside a YAML file for overriding.

Installation

Through Composer, obviously:

{
    "require": {
        "arhframe/yamlarh": "1.*"
    }
}

Usage

use Arhframe\Yamlarh\Yamlarh;

$yamlarh = new Yamlarh(__DIR__.'/path/to/yaml/file');
$array = $yamlarh->parse();

Examples

Variable injection

Variable injection is hierarchical; values are resolved in this order:

  1. In the YAML file, with imports
  2. In your global scope
  3. In your constants

Yaml file:

arhframe:
  myvar1: test
  myvar2: %arhframe.myvar1%
  myvar3: %var3%
  myvar4: %VARCONSTANT%

Php file:

use Arhframe\Yamlarh\Yamlarh;
$var3 = 'testvar';
define('VARCONSTANT', 'testconstant');
$yamlarh = new Yamlarh(__DIR__.'/test.yml');
$array = $yamlarh->parse();
print_r($array);

Output:

  Array
  (
      [arhframe] => Array
          (
              [myvar1] => test
              [myvar2] => test
              [myvar3] => testvar
              [myvar4] => testconstant
          )

  ) 

Object injection

It uses the SnakeYAML (YAML parser for Java) style:

arhframe:
  file: !! Arhframe.Util.File(test.php) # will instantiate Arhframe\Util\File('test.php') into the file var after parsing

Import

Imports are also hierarchical: the last one imported overrides the others. Use @import in your file:

file1.yml

arhframe:
  var1: var
test: arhframe

@import:
 - file2.yml #you can use a relative path to your yaml file or an absolute

file2.yml

arhframe:
  var1: varoverride
test2: var3

After parsing file1.yml, the YAML will look like:

arhframe:
  var1: varoverride
test: arhframe
test2: var3

Include

You can include a yaml file into another:

file1.yml

arhframe:
  var1: var
test:
  @include:
    - file2.yml #you can use a relative path to your yaml file or an absolute

file2.yml

test2: var3

After parsing file1.yml, the YAML will look like:

arhframe:
  var1: var
test:
  test2: var3




Contributions to projects



  • bosh-cli: New BOSH CLI (beta)
  • bosh_exporter: BOSH Prometheus Exporter
  • cachet-monitor: Monitors a URL and posts data points to Cachet
  • cf-java-client: Java client library and tools for Cloud Foundry

BOSH CLI

Usage

Client Library

This project includes director and uaa packages meant to be used in your project for programmatic access to the Director API.

See docs/example.go for a short working usage example.

Developer Notes

See the readme

BOSH Prometheus Exporter

A Prometheus exporter for BOSH metrics. Please refer to the FAQ for general questions about this exporter.

Architecture overview

Installation

Binaries

Download the prebuilt binaries for your platform, then run:

$ ./bosh_exporter <flags>

From source

Using the standard go install (you must already have Go installed on your local machine):

$ go install github.com/bosh-prometheus/bosh_exporter
$ bosh_exporter <flags>

Docker

To run the bosh exporter as a Docker container, run:

$ docker run -p 9190:9190 boshprometheus/bosh-exporter <flags>

Cloud Foundry

The exporter can be deployed to an already existing Cloud Foundry environment:

$ git clone https://github.com/bosh-prometheus/bosh_exporter.git
$ cd bosh_exporter

Modify the included application manifest file to include your BOSH properties. Then you can push the exporter to your Cloud Foundry environment:

$ cf push

BOSH

This exporter can be deployed using the Prometheus BOSH Release.

Usage

Flags

  • bosh.url / BOSH_EXPORTER_BOSH_URL: Required. BOSH URL.
  • bosh.username / BOSH_EXPORTER_BOSH_USERNAME: [1]. BOSH Username.
  • bosh.password / BOSH_EXPORTER_BOSH_PASSWORD: [1]. BOSH Password.
  • bosh.uaa.client-id / BOSH_EXPORTER_BOSH_UAA_CLIENT_ID: [1]. BOSH UAA Client ID.
  • bosh.uaa.client-secret / BOSH_EXPORTER_BOSH_UAA_CLIENT_SECRET: [1]. BOSH UAA Client Secret.
  • bosh.log-level / BOSH_EXPORTER_BOSH_LOG_LEVEL: Optional, default ERROR. BOSH log level (DEBUG, INFO, WARN, ERROR, NONE).
  • bosh.ca-cert-file / BOSH_EXPORTER_BOSH_CA_CERT_FILE: Required. BOSH CA certificate file.
  • filter.deployments / BOSH_EXPORTER_FILTER_DEPLOYMENTS: Optional. Comma-separated deployments to filter.
  • filter.azs / BOSH_EXPORTER_FILTER_AZS: Optional. Comma-separated AZs to filter.
  • filter.collectors / BOSH_EXPORTER_FILTER_COLLECTORS: Optional. Comma-separated collectors to filter. If not set, all collectors will be enabled (Deployments, Jobs, ServiceDiscovery).
  • metrics.namespace / BOSH_EXPORTER_METRICS_NAMESPACE: Optional, default bosh. Metrics namespace.
  • metrics.environment / BOSH_EXPORTER_METRICS_ENVIRONMENT: Required. Environment label to be attached to metrics.
  • sd.filename / BOSH_EXPORTER_SD_FILENAME: Optional, default bosh_target_groups.json. Full path to the Service Discovery output file.
  • sd.processes_regexp / BOSH_EXPORTER_SD_PROCESSES_REGEXP: Optional. Regexp to filter Service Discovery process names.
  • web.listen-address / BOSH_EXPORTER_WEB_LISTEN_ADDRESS: Optional, default :9190. Address to listen on for the web interface and telemetry.
  • web.telemetry-path / BOSH_EXPORTER_WEB_TELEMETRY_PATH: Optional, default /metrics. Path under which to expose Prometheus metrics.
  • web.auth.username / BOSH_EXPORTER_WEB_AUTH_USERNAME: Optional. Username for web interface basic auth.
  • web.auth.password / BOSH_EXPORTER_WEB_AUTH_PASSWORD: Optional. Password for web interface basic auth.
  • web.tls.cert_file / BOSH_EXPORTER_WEB_TLS_CERTFILE: Optional. Path to a file that contains the TLS certificate (PEM format). If the certificate is signed by a certificate authority, the file should be the concatenation of the server's certificate, any intermediates, and the CA's certificate.
  • web.tls.key_file / BOSH_EXPORTER_WEB_TLS_KEYFILE: Optional. Path to a file that contains the TLS private key (PEM format).

[1] When BOSH delegates user management to UAA, either the bosh.username and bosh.password flags or the bosh.uaa.client-id and bosh.uaa.client-secret flags may be used; otherwise bosh.username and bosh.password are required. When using UAA with the bosh.username and bosh.password authentication method, tokens are not refreshed, so after a period of time the exporter will be unable to communicate with the BOSH API; use this method only when testing the exporter. For production, the bosh.uaa.client-id and bosh.uaa.client-secret authentication method is recommended.

Metrics

The exporter returns the following metrics:

  • metrics.namespace_scrapes_total: Total number of times BOSH was scraped for metrics. Labels: environment, bosh_name, bosh_uuid.
  • metrics.namespace_scrape_errors_total: Total number of times an error occurred while scraping BOSH. Labels: environment, bosh_name, bosh_uuid.
  • metrics.namespace_last_scrape_error: Whether the last scrape of metrics from BOSH resulted in an error (1 for error, 0 for success). Labels: environment, bosh_name, bosh_uuid.
  • metrics.namespace_last_scrape_timestamp: Number of seconds since 1970 since the last scrape from BOSH. Labels: environment, bosh_name, bosh_uuid.
  • metrics.namespace_last_scrape_duration_seconds: Duration of the last scrape from BOSH. Labels: environment, bosh_name, bosh_uuid.

The exporter returns the following Deployments metrics:

  • metrics.namespace_deployment_release_info: Labeled BOSH Deployment Release Info with a constant 1 value. Labels: environment, bosh_name, bosh_uuid, bosh_deployment, bosh_release_name, bosh_release_version.
  • metrics.namespace_deployment_stemcell_info: Labeled BOSH Deployment Stemcell Info with a constant 1 value. Labels: environment, bosh_name, bosh_uuid, bosh_deployment, bosh_stemcell_name, bosh_stemcell_version, bosh_stemcell_os_name.
  • metrics.namespace_last_deployments_scrape_timestamp: Number of seconds since 1970 since the last scrape of Deployments metrics from BOSH. Labels: environment, bosh_name, bosh_uuid.
  • metrics.namespace_last_deployments_scrape_duration_seconds: Duration of the last scrape of Deployments metrics from BOSH. Labels: environment, bosh_name, bosh_uuid.

The exporter returns the following Jobs metrics. Unless noted otherwise, each carries the labels environment, bosh_name, bosh_uuid, bosh_deployment, bosh_job_name, bosh_job_id, bosh_job_index, bosh_job_az, bosh_job_ip, and the process metrics additionally carry bosh_job_process_name:

  • metrics.namespace_job_healthy: BOSH Job Healthy (1 for healthy, 0 for unhealthy)
  • metrics.namespace_job_load_avg01: BOSH Job Load avg01
  • metrics.namespace_job_load_avg05: BOSH Job Load avg05
  • metrics.namespace_job_load_avg15: BOSH Job Load avg15
  • metrics.namespace_job_cpu_sys: BOSH Job CPU System
  • metrics.namespace_job_cpu_user: BOSH Job CPU User
  • metrics.namespace_job_cpu_wait: BOSH Job CPU Wait
  • metrics.namespace_job_mem_kb: BOSH Job Memory KB
  • metrics.namespace_job_mem_percent: BOSH Job Memory Percent
  • metrics.namespace_job_swap_kb: BOSH Job Swap KB
  • metrics.namespace_job_swap_percent: BOSH Job Swap Percent
  • metrics.namespace_job_system_disk_inode_percent: BOSH Job System Disk Inode Percent
  • metrics.namespace_job_system_disk_percent: BOSH Job System Disk Percent
  • metrics.namespace_job_ephemeral_disk_inode_percent: BOSH Job Ephemeral Disk Inode Percent
  • metrics.namespace_job_ephemeral_disk_percent: BOSH Job Ephemeral Disk Percent
  • metrics.namespace_job_persistent_disk_inode_percent: BOSH Job Persistent Disk Inode Percent
  • metrics.namespace_job_persistent_disk_percent: BOSH Job Persistent Disk Percent
  • metrics.namespace_job_process_healthy: BOSH Job Process Healthy (1 for healthy, 0 for unhealthy)
  • metrics.namespace_job_process_uptime_seconds: BOSH Job Process Uptime in seconds
  • metrics.namespace_job_process_cpu_total: BOSH Job Process CPU Total
  • metrics.namespace_job_process_mem_kb: BOSH Job Process Memory KB
  • metrics.namespace_job_process_mem_percent: BOSH Job Process Memory Percent
  • metrics.namespace_last_jobs_scrape_timestamp: Number of seconds since 1970 since the last scrape of Job metrics from BOSH (labels: environment, bosh_name, bosh_uuid only)
  • metrics.namespace_last_jobs_scrape_duration_seconds: Duration of the last scrape of Job metrics from BOSH (labels: environment, bosh_name, bosh_uuid only)

The exporter returns the following ServiceDiscovery metrics:

  • metrics.namespace_last_service_discovery_scrape_timestamp: Number of seconds since 1970 since the last scrape of Service Discovery from BOSH. Labels: environment, bosh_name, bosh_uuid.
  • metrics.namespace_last_service_discovery_scrape_duration_seconds: Duration of the last scrape of Service Discovery from BOSH. Labels: environment, bosh_name, bosh_uuid.

Service Discovery

If the ServiceDiscovery collector is enabled, the exporter will write a JSON file at the sd.filename location containing a list of static configs that can be used with the Prometheus file-based service discovery mechanism:

[
  {
    "targets": ["10.244.0.12"],
    "labels":
      {
        "__meta_bosh_job_process_name": "bosh_exporter"
      }
  },
  {
    "targets": ["10.244.0.11", "10.244.0.12", "10.244.0.13", "10.244.0.14"],
    "labels":
      {
        "__meta_bosh_job_process_name": "node_exporter"
      }
  }
]

The list of targets can be filtered using the sd.processes_regexp flag.
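A minimal sketch of a Prometheus scrape configuration consuming that file via file-based service discovery (the file path and job name are illustrative; the relabeling copies the exporter's __meta label onto a regular label before __meta labels are dropped):

```yaml
scrape_configs:
  - job_name: "bosh-processes"
    file_sd_configs:
      - files:
          - "/path/to/bosh_target_groups.json"
    relabel_configs:
      - source_labels: ["__meta_bosh_job_process_name"]
        target_label: "process"
```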

Contributing

Refer to the contributing guidelines.

License

Apache License 2.0, see LICENSE.

See the readme


Cloud Foundry Java Client

The cf-java-client project is a Java language binding for interacting with a Cloud Foundry instance. The project is broken up into a number of components which expose different levels of abstraction depending on need.

  • cloudfoundry-client – Interfaces, request, and response objects mapping to the Cloud Foundry REST APIs. This project has no implementation and therefore cannot connect to a Cloud Foundry instance on its own.
  • cloudfoundry-client-spring – The default implementation of the cloudfoundry-client project. This implementation is based on the Spring Framework RestTemplate.
  • cloudfoundry-operations – An API and implementation that corresponds to the Cloud Foundry CLI operations. This project builds on cloudfoundry-client and therefore has a single implementation.
  • cloudfoundry-maven-plugin / cloudfoundry-gradle-plugin – Build plugins for Maven and Gradle. These projects build on cloudfoundry-operations and therefore have single implementations.

Most projects will need two dependencies: the Operations API and an implementation of the Client API. For Maven, the dependencies would be defined like this:

<dependencies>
    <dependency>
        <groupId>org.cloudfoundry</groupId>
        <artifactId>cloudfoundry-client-spring</artifactId>
        <version>${cf-java-client.version}</version>
    </dependency>
    <dependency>
        <groupId>org.cloudfoundry</groupId>
        <artifactId>cloudfoundry-operations</artifactId>
        <version>${cf-java-client.version}</version>
    </dependency>
    ...
</dependencies>

The artifacts can be found in the Spring release and snapshot repositories:

<repositories>
    <repository>
        <id>spring-releases</id>
        <name>Spring Releases</name>
        <url>http://repo.spring.io/release</url>
    </repository>
    ...
</repositories>
<repositories>
    <repository>
        <id>spring-snapshots</id>
        <name>Spring Snapshots</name>
        <url>http://repo.spring.io/snapshot</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>
    ...
</repositories>

For Gradle, the dependencies would be defined like this:

dependencies {
    compile "org.cloudfoundry:cloudfoundry-client-spring:$cfJavaClientVersion"
    compile "org.cloudfoundry:cloudfoundry-operations:$cfJavaClientVersion"
    ...
}

The artifacts can be found in the Spring release and snapshot repositories:

repositories {
    maven { url "http://repo.spring.io/release" }
    ...
}
repositories {
    maven { url "http://repo.spring.io/snapshot" }
    ...
}

Usage

Both the cloudfoundry-operations and cloudfoundry-client projects follow a "Reactive" design pattern and expose their responses with Reactive Streams Publishers. The choice to expose Reactive Streams Publishers gives the project interoperability with the various reactive framework implementations such as Project Reactor and RxJava. In the examples that follow, Project Reactor is used, but all reactive frameworks work similarly.

CloudFoundryClient and CloudFoundryOperations Builders

The lowest-level building block of the API is a CloudFoundryClient. This is only an interface and the default implementation of this is the SpringCloudFoundryClient. To instantiate one, you configure it with a builder:

SpringCloudFoundryClient.builder()
    .host("api.run.pivotal.io")
    .username("example-username")
    .password("example-password")
    .build();

In Spring-based applications, you'll want to encapsulate this in a bean definition:

@Bean
CloudFoundryClient cloudFoundryClient(@Value("${cf.host}") String host,
                                      @Value("${cf.username}") String username,
                                      @Value("${cf.password}") String password) {
    return SpringCloudFoundryClient.builder()
            .host(host)
            .username(username)
            .password(password)
            .build();
}

The CloudFoundryClient provides direct access to the raw REST APIs. This level of abstraction provides the most detailed and powerful access to the Cloud Foundry instance, but also requires users to perform quite a lot of orchestration on their own. Most users will instead want to work at the CloudFoundryOperations layer. Once again this is only an interface and the default implementation of this is the DefaultCloudFoundryOperations. To instantiate one, you configure it with a builder:

new CloudFoundryOperationsBuilder()
    .cloudFoundryClient(cloudFoundryClient)
    .target("example-organization", "example-space")
    .build();

In Spring-based applications, you'll want to encapsulate this in a bean definition as well:

@Bean
CloudFoundryOperations cloudFoundryOperations(CloudFoundryClient cloudFoundryClient,
                                              @Value("${cf.organization}") String organization,
                                              @Value("${cf.space}") String space) {
    return new CloudFoundryOperationsBuilder()
            .cloudFoundryClient(cloudFoundryClient)
            .target(organization, space)
            .build();
}

CloudFoundryOperations APIs

Once you've got a reference to the CloudFoundryOperations, it's time to start making calls to the Cloud Foundry instance. One of the simplest possible operations is listing all of the organizations the user is a member of. The following example does three things:

  1. Requests a list of all organizations
  2. Extracts the name of each organization
  3. Prints the name of each organization to System.out

Streams
    .wrap(this.cloudFoundryOperations.organizations().list())
    .map(Organization::getName)
    .consume(System.out::println);

To relate the example to the description above the following happens:

  1. Streams.wrap(...) – Wraps the Reactive Streams Publisher (an interoperability type) in the Reactor-native Stream type
  2. .map(...) – Maps an input type to an output type. This example uses a method reference; the equivalent lambda would look like organization -> organization.getName().
  3. .consume(...) – The terminal operation that consumes each item in the stream. Again, this example uses a method reference; the equivalent lambda would look like name -> System.out.println(name).

CloudFoundryClient APIs

As mentioned earlier, the cloudfoundry-operations implementation builds upon the cloudfoundry-client API. That implementation takes advantage of the same reactive style in the lower-level API. The implementation of the Organizations.list() method (which was demonstrated above) looks like the following (roughly):

ListOrganizationsRequest request = ListOrganizationsRequest.builder()
    .page(1)
    .build();

Streams
    .wrap(cloudFoundryClient.organizations().list(request))
    .flatMap(response -> Streams.from(response.getResources()))
    .map(resource -> {
        return Organization.builder()
            .id(resource.getMetadata().getId())
            .name(resource.getEntity().getName())
            .build();
    });

The above example is more complicated:

  1. Streams.wrap(...) – Wraps the Reactive Streams Publisher in the Reactor-native Stream type
  2. .flatMap(...) – substitutes the original stream with a stream of the Resources returned by the requested page
  3. .map(...) – Maps the Resource to an Organization type

Maven Plugin

TODO: Document once implemented

Gradle Plugin

TODO: Document once implemented

Development

The project depends on Java 8 but is built to be Java 7 compatible. To build from source and install to your local Maven cache, run the following:

$ ./mvnw clean install

To run the integration tests, run the following:

$ ./mvnw -Pintegration-test clean test

IMPORTANT Integration tests should be run against an empty Cloud Foundry instance. The integration tests are destructive, and will remove nearly everything on an instance given the chance.

The integration tests require a running instance of Cloud Foundry to test against. We recommend using MicroPCF to start a local instance to test with. To configure the integration tests with the appropriate connection information use the following environment variables:

  • TEST_HOST: The host of the Cloud Foundry instance. Typically something like api.local.micropcf.io.
  • TEST_ORGANIZATION: The default organization to use for testing.
  • TEST_PASSWORD: The test user's password.
  • TEST_SKIPSSLVALIDATION: Whether to skip SSL validation when connecting to the Cloud Foundry instance. Typically true when connecting to a MicroPCF instance.
  • TEST_SPACE: The default space to use for testing.
  • TEST_USERNAME: The test user's username.

Contributing

Pull requests and Issues are welcome.

License

This project is released under version 2.0 of the Apache License.



  • cf-ssh: SSH into a running container for your Cloud Foundry application, run one-off tasks, debug your app, and more
  • cf-uaa-guard-service: UAA proxy as a service
  • cf-webui: Single-page Cloud Foundry web user interface using AngularJS and Bootstrap
  • cf-zsh-autocompletion: Oh My Zsh tab completion / autocompletion for Cloud Foundry

cf-ssh

SSH into a running container for your Cloud Foundry application, run one-off tasks, debug your app, and more.

Initial implementation requires the application to have a manifest.yml.

Also, cf-ssh requires that you run the command from within the project source folder. It performs a cf push to create a new application based on the same source code/path, buildpack, and variables. Once CF Runtime supports copying app bits (#78847148), cf-ssh will be upgraded to use app-bit copying and will no longer require local access to project app bits.

It is desired that cf-ssh works correctly from all platforms that support the cf CLI.

Windows is a target platform but has not yet been tested. Please give feedback in the Issues.

Requirements

This tool requires the following CLIs to be installed

It is assumed that in using cf-ssh you have already successfully targeted a Cloud Foundry API, and have pushed an application (successfully or not).

This tool also currently requires outbound internet access to the http://tmate.io/ proxies. In future, to avoid the requirement of public Internet access, it would be great to package up the tmate server as a BOSH release and deploy it into the same infrastructure as the Cloud Foundry deployment.

Why require ssh CLI?

This project is written in the Go programming language, and there is a candidate library, go.crypto, that could have natively supported an interactive SSH session. Unfortunately, it supports a subset of ciphers that don't seem to work with the tmate.io proxies [stackoverflow].

Using the go.crypto library I was getting the following error. In future, perhaps either tmate.io or go.crypto will change to support each other.

unable to connect: ssh: handshake failed: ssh: no common algorithms

Installation

Download a pre-compiled release for your platform. Place it in your $PATH or %PATH% and rename to cf-ssh (or cf-ssh.exe for Windows).

Alternately, if you have Go setup you can build it from source:

go get github.com/cloudfoundry-community/cf-ssh

Usage

cd path/to/app
cf-ssh -f manifest.yml

Publish releases

To generate the pre-compiled executables for the target platforms, using gox:

gox -output "out/{{.Dir}}_{{.OS}}_{{.Arch}}" -osarch "darwin/amd64 linux/amd64 windows/amd64 windows/386" ./...

They are now in the out folder:

-rwxr-xr-x  1 drnic  staff   4.0M Oct 25 23:05 cf-ssh_darwin_amd64
-rwxr-xr-x  1 drnic  staff   4.0M Oct 25 23:05 cf-ssh_linux_amd64
-rwxr-xr-x  1 drnic  staff   3.4M Oct 25 23:05 cf-ssh_windows_386.exe
-rwxr-xr-x  1 drnic  staff   4.2M Oct 25 23:05 cf-ssh_windows_amd64.exe
VERSION=v0.1.0
github-release release -u cloudfoundry-community -r cf-ssh -t $VERSION --name "cf-ssh $VERSION" --description 'SSH into a running container for your Cloud Foundry application, run one-off tasks, debug your app, and more.'

for arch in darwin_amd64 linux_amd64 windows_amd64 windows_386; do
  github-release upload -u cloudfoundry-community -r cf-ssh -t $VERSION --name cf-ssh_$arch --file out/cf-ssh_$arch*
done

UAA Auth Route Service Build Status

(Based on https://github.com/benlaplanche/cf-basic-auth-route-service)

Using the new route services functionality available in Cloud Foundry, you can now bind applications to routing services. Traffic sent to your application is routed through the bound routing service before continuing on to your application.

This allows you to perform actions on the HTTP traffic, such as enforcing authentication, rate limiting or logging.

For more details see:

Getting Started

There are two components, and thus two steps to getting this up and running: the broker and the filtering proxy.

Before getting started you will need:

  • Access to a cloud foundry deployment
  • UAA client credentials

Uncomment and fill in the required environment variables as shown in broker-manifest.yml.sample, and copy the manifest to broker-manifest.yml.

Run cf push -f broker-manifest.yml to deploy the uaa-guard-broker app.

Uncomment and fill in the required environment variables as shown in proxy-manifest.yml.sample, and copy the manifest to proxy-manifest.yml.

Run cf push -f proxy-manifest.yml to deploy the uaa-guard-proxy app.

Once the broker is deployed, you can register it:

cf create-service-broker \
    uaa-auth-broker \
    $GUARD_BROKER_USERNAME \
    $GUARD_BROKER_PASSWORD \
    https://uaa-guard-broker.my-paas.com \
    --space-scoped

Once you've created the service broker, you must run enable-service-access in order to see the service in the marketplace.

cf enable-service-access uaa-auth

You should now be able to see the service in the marketplace by running cf marketplace.

Protecting an application with UAA authentication

Now that you have set up the supporting components, you can protect your application with auth!

First, create an instance of the service from the marketplace; here we are calling our instance authy:

$ cf create-service uaa-auth uaa-auth authy

Next, identify the application and URL you wish to protect. Here we have an application called hello with a URL of https://hello.my-paas.com.

Then bind the service instance you created, called authy, to the hello.my-paas.com route:

⇒  cf bind-route-service my-paas.com authy --hostname hello

Binding may cause requests for route hello.my-paas.com to be altered by service instance authy. Do you want to proceed?> y
Binding route hello.my-paas.com to service instance authy in org org / space space as admin...
OK

You can validate that the route for hello is now bound to the authy service instance:

⇒  cf routes
Getting routes for org org / space space as admin ...

space          host                domain            port   path   type   apps                service
space          hello               my-paas.com                            hello               authy

All of that looks good, so the last step is to validate that we can no longer view the hello application without providing credentials:

⇒  curl -k https://hello.my-paas.com
Unauthorized

And if you visit it in a browser, you will be redirected to UAA.

Knowing who is logged in

This service forwards a header, X-AUTH-USER, containing the email of the logged-in user.


CF WebUI

CF WebUI is a modern single-page web-frontend for Cloud Foundry based on AngularJS and Bootstrap.

Cloud Foundry is the open-source Platform as a Service (PaaS) framework on which many PaaS offerings are based (e.g. Pivotal Web Services, HP Helion, IBM BlueMix, Swisscom Application Cloud). It allows developers to provision, manage, and scale their applications in the cloud quickly and easily. For end users, Cloud Foundry provides a REST-based API and a command line interface (CLI) client. No official free and open source web front-end is currently available.

Getting started

1. Clone the project: git clone https://github.com/icclab/cf-webui
2. Change directory to cf-webui: cd cf-webui
3. Change the manifest.yml to your options and set the endpoint to your desired Cloud Foundry instance. E.g.:

    ---
    applications:  
    - name: cf-webui  
      memory: 128M  
      host: console-cf-webui-${random-word}  
      path: ./build
      buildpack: staticfile_buildpack
      env: 
        API_ENDPOINT: https://api.run.pivotal.io
        # Use Google DNS by default
        NGINX_RESOLVER: 8.8.8.8
        # Enforce HTTPS (using the x_forwarded_proto check). Default: enabled
        FORCE_HTTPS: 1

4. Install npm packages: npm install
5. Build the application using Grunt: grunt build
6. Push the application to Cloud Foundry using the cf Command Line Interface (CLI): cf push
7. Enjoy the CF WebUI!

Disclaimer

The current version is an early release (alpha). It is not yet production-ready. Some features are still to come and it may contain major bugs.

Community & Support

Please report bugs and request features using GitHub Issues. For additional information, you can contact the maintainer directly.

Community discussions about CF-WebUI happen in the CF-WebUI-discuss mailing list. Once you subscribe to the list, you can send mail to the list address: icclab-cf-webui@dornbirn.zhaw.ch. The mailing list archives are also available on the web.

Please follow the ICCLab blog for updates.

License

CF-WebUI is licensed under the Apache License version 2.0. See the LICENSE file.


cf-zsh-autocompletion

Oh My Zsh (or probably any zsh but YMMV) plugin for cf (Cloud Foundry) autocompletion.

See the known issues below for what doesn't work.

Future

Now that the CLI supports plugins, I'm considering abandoning this project in favor of a true CLI plugin.

Installation

Drop the cf directory into your $ZSH/custom/plugins/ directory (usually ~/.oh-my-zsh/custom/plugins). Then add cf to the plugins line of your .zshrc file. For example, here is my .zshrc plugins line:

# Which plugins would you like to load? (plugins can be found in ~/.oh-my-zsh/plugins/*)
# Custom plugins may be added to ~/.oh-my-zsh/custom/plugins/
# Example format: plugins=(rails git textmate ruby lighthouse)
# Add wisely, as too many plugins slow down shell startup.
plugins=(git docker jsontools tmux vagrant bosh cf)

Runtime Options

Personally, I think the shorthand options to many cf commands clutter up the tab view, so I don't include them in the default output. If you want them included, export CF_ZSH_INCLUDE_SHORT=true. The plugin checks this variable on every invocation, so if you want to play with it you can set it on the command line. Otherwise, stick it in your .zshrc and have at it.

Example

Type cf <tab> and watch the magic happen

➜  ~  cf <tab>                                                                      
zsh: do you wish to see all 120 possibilities (60 lines)? y                                                
api                                     passwd
app                                     plugins
apps                                    purge-service-offering
auth                                    push
bind-running-security-group             quota
... and on and on

➜  ~  cf create-<tab>                                                                                                  
create-buildpack              create-security-group         create-space
create-domain                 create-service                create-space-quota
create-org                    create-service-auth-token     create-user
create-quota                  create-service-broker         create-user-provided-service
create-route                  create-shared-domain

Known Issues

It doesn't provide extended help for commands, which would be nice. For instance when you type cf push <tab> you don't get the usage.

It doesn't know about parameters for every command yet. It will prompt with spaces, orgs and apps for some commands.

El Problemo?

Open an issue or submit a PR please!

Tracker

Is available here



  • cfplayground: Web portal for CF, lets users try out CF with a free temp account and interactive tutorials
  • cg-deck: A web console for managing Cloud Foundry apps
  • cli-plugin-repo: Public repository for community-created CF CLI plugins
  • cloudfoundry-cli: A CLI for Cloud Foundry written in Go

CF Playground

The goal of this project is to provide an easily accessible environment for users who want to experience Cloud Foundry, without having to set up the platform or learn how to operate it. CF Playground provides an interactive tutorial.

CF Playground in Action

Setting up CF Playground

The following instructions are for OS X/Linux; Windows support is coming soon. You will need to host your own Cloud Foundry environment (bosh-lite or any full deployment).

1) Ensure that Go version 1.2+ is installed on the system

2) Setup the GOPATH

export GOPATH=~/go
export PATH=$GOPATH/bin:$PATH

3) Download CF Playground

go get github.com/cloudfoundry-community/cfplayground
cd $GOPATH/src/github.com/cloudfoundry-community/cfplayground

(Ignore any warnings about "no buildable Go source files".)

4) Create a config file config.json under config/ with the info of your Cloud Foundry environment. A sample config file is provided for reference: config/sample_config.json

* If no config.json is found, boshlite_config.json will be used to target a local bosh-lite environment.

5) Run CF Playground

go run main.go
  • If you are running CF Playground under Linux, download the Linux CF CLI binary, then rename and replace the pcf file under assets/cf/ with the downloaded binary.

Limitations

  • No Windows support (coming soon)
  • Arbitrary app pushing (functioning, improvement to be made)
  • Temp user account/space clean up (work in progress)
  • Restore user session (functioning, improvement to be made)
  • The supported CF commands are:
    • cf push
    • cf apps
    • cf app {app name}
    • cf delete {app name}

18F Cloud Foundry Deck

Build Status

Tech Stack

  • Go (v1.5 required) for the backend server. Go Code Coverage Status

  • AngularJS for the frontend. JS Code Coverage Status

Setup

Create a Client with UAAC

  • Make sure UAAC is installed.
  • Target your UAA server. uaac target <uaa.your-domain.com>
  • Login with your current UAA account. uaac token client get <your admin account> -s <your uaa admin password>
  • Create client account:
    uaac client add <your-client-id> \
    --authorities cloud_controller.admin,cloud_controller.read,cloud_controller.write,openid,scim.read \
    --authorized_grant_types authorization_code,client_credentials,refresh_token \
    --scope cloud_controller.admin,cloud_controller.read,cloud_controller.write,openid,scim.read \
    -s <your-client-secret>
  • Unable to create an account still? Troubleshoot here

Set the environment variables

If you are testing locally, export these variables. If you are deploying to Cloud Foundry, modify the manifest.yml.

  • CONSOLE_CLIENT_ID: Registered client id with UAA.
  • CONSOLE_CLIENT_SECRET: The client secret.
  • CONSOLE_HOSTNAME: The URL of the service itself.
  • CONSOLE_LOGIN_URL: The base URL of the auth service. i.e. https://login.domain.com
  • CONSOLE_UAA_URL: The URL of the UAA service. i.e. https://uaa.domain.com
  • CONSOLE_API_URL: The URL of the API service. i.e. http://api.domain.com
  • CONSOLE_LOG_URL: The URL of the loggregator service. i.e. http://loggregator.domain.com
  • PPROF_ENABLED: An optional variable. If set to true or 1, will turn on /debug/pprof endpoints as seen here

Front end

Install front end dependencies

npm install

Running locally

  • Make sure all of your environment variables are set as mentioned above.
  • Install godep
  • Run godep restore to get all third party code
  • go run server.go
  • Navigate browser to http://localhost:9999

Unit Testing

Running Go unit tests

  • go test ./...

Running Angular unit tests

Tests can then be run with the command:

npm run tests

To get a viewable coverage report, change the coverageReporter object in karma.conf.js from json to html:

coverageReporter: {
    type: 'html',
    dir: 'coverage',
    subdir: '.'
}

Acceptance Tests

This project currently uses a combination of Agouti + Ginkgo + Gomega to provide BDD acceptance testing. All the acceptance tests are in the 'acceptance' folder.

Setup

  • Make sure you have PhantomJS installed: brew install phantomjs
  • Install agouti: go get github.com/sclevine/agouti
  • Install ginkgo: go get github.com/onsi/ginkgo/ginkgo
  • Install gomega: go get github.com/onsi/gomega
  • To run locally, in addition to the variables in the "Set the environment variables" section, you will need to set these additional variables in your environment:
  • CONSOLE_TEST_USERNAME: The username of the account you want the tests to use to login into your CONSOLE_LOGIN_URL
  • CONSOLE_TEST_PASSWORD: The password of the account you want the tests to use to login into your CONSOLE_LOGIN_URL
  • CONSOLE_TEST_ORG_NAME: The test organization the user should be navigating to.
  • CONSOLE_TEST_SPACE_NAME: The test space the user should be navigating to.
  • CONSOLE_TEST_APP_NAME: The test app the user should be navigating to.
  • CONSOLE_TEST_HOST: The host that the app can create a mock route for.
  • CONSOLE_TEST_DOMAIN: The domain for the mock route.

Running acceptance tests

  • cd acceptance && go test -tags acceptance

Deploying

  • cf push <optional-app-name>

CI

This project uses Travis-CI

  • The following environment variables need to be set in plain text in the global env section:
    • CONSOLE_API_URL, CONSOLE_UAA_URL, CONSOLE_LOG_URL, CONSOLE_LOGIN_URL, CONSOLE_HOSTNAME="http://localhost:9999", CONSOLE_TEST_ORG_NAME, CONSOLE_TEST_SPACE_NAME, and CONSOLE_TEST_APP_NAME
  • In case you fork this project for your own use (no need to do this if forking to make a pull request), you will need to use the Travis-CI CLI tool to re-encrypt all the environment variables.
    • travis encrypt CONSOLE_CLIENT_ID='<your client id>' --add env.global
    • travis encrypt CONSOLE_CLIENT_SECRET='<your client secret>' --add env.global
    • travis encrypt CONSOLE_TEST_PASSWORD='<the test user account password>' --add env.global
    • travis encrypt CONSOLE_TEST_USERNAME='<the test user account username>' --add env.global
    • travis encrypt CF_USERNAME='<the user account username used to deploy>' --add env.global
    • travis encrypt CF_PASSWORD='<the user account password used to deploy>' --add env.global

What’s a Deck?

From Wikipedia:

The Sprawl trilogy (also known as the Neuromancer, Cyberspace, or Matrix trilogy) is William Gibson's first set of novels, composed of Neuromancer (1984), Count Zero (1986), and Mona Lisa Overdrive (1988).

Cyberspace Deck

Also called a "deck" for short, it is used to access the virtual representation of the matrix. The deck is connected to a tiara-like device that operates by using electrodes to stimulate the user's brain while drowning out other external stimulation. As Case describes them, decks are basically simplified simstim units.


Cloud Foundry CLI Plugin Repository (CLIPR)Build Status

This is a public repository for community-created CF CLI plugins. To submit your plugin for approval, please submit a pull request according to the guidelines below.

If you are looking for information about the Plugin Repo Server, please go here.

Submitting Plugins

  1. You need to have git installed
  2. Clone this repo git clone https://github.com/cloudfoundry-incubator/cli-plugin-repo
  3. Include your plugin information in repo-index.yml, here is an example of a new plugin entry

    - name: new_plugin
    description: new_plugin to be made available for the CF community
    version: 1.0.0
    created: 2015-1-31
    updated: 2015-1-31
    company:
    authors:
    - name: Sample-Author
      homepage: http://github.com/sample-author
      contact: contact@sample-author.io
    homepage: http://github.com/sample-author/new_plugin
    binaries:
    - platform: osx 
      url: https://github.com/sample-author/new_plugin/releases/download/v1.0.0/echo_darwin
      checksum: 2a087d5cddcfb057fbda91e611c33f46
    - platform: win64 
      url: https://github.com/sample-author/new_plugin/releases/download/v1.0.0/echo_win64.exe
      checksum: b4550d6594a3358563b9dcb81e40fd66
    - platform: linux32
      url: https://github.com/sample-author/new_plugin/releases/download/v1.0.0/echo_linux32
      checksum: f6540d6594a9684563b9lfa81e23id93

    Please make sure the spacing and colons are correct in the entry. The following describes each field's usage.

      • name: Name of your plugin; must not conflict with other existing plugins in the repo.
      • description: Describe your plugin in a line or two. This description will show up when your plugin is listed on the command line.
      • version: Version number of your plugin, in [major].[minor].[build] form.
      • created: Date of first submission of the plugin, in year-month-day form.
      • updated: Date of last update of the plugin, in year-month-day form.
      • company: Optional field detailing the company or organization that created the plugin.
      • authors: Fields detailing the authors of the plugin:
        • name: name of the author
        • homepage: optional link to the author's homepage
        • contact: optional ways to contact the author (email, twitter, phone, etc.)
      • homepage: Link to the homepage where the source code is hosted. Currently we only support open source plugins.
      • binaries: Fields detailing the various binary versions of your plugin. To reach as large an audience as possible, we encourage contributors to cross-compile their plugins for as many platforms as possible. Go provides everything you need to cross-compile for different platforms.
        • platform: the OS for this binary; supported values are osx, linux32, linux64, win32, win64
        • url: link to the binary file itself
        • checksum: SHA-1 of the binary file, for verification
  4. After making the changes, fork the repository
  5. Add your fork as a remote

    cd $GOPATH/src/github.com/cloudfoundry-incubator/cli-plugin-repo
    git remote add your_name https://github.com/your_name/cli-plugin-repo
  6. Push the changes to your fork and submit a Pull Request

Running your own Plugin Repo Server

Included as part of this repository is the CLI Plugin Repo (CLIPR), a reference implementation of a repo server. For information on how to run CLIPR or how to write your own, please see the CLIPR documentation here.


Cloud Foundry CLI Build Status

This is the official command line client for Cloud Foundry.

You can follow our development progress on Pivotal Tracker.

Getting Started

Download and run the installer for your platform from the Downloads Section.

Once installed, you can log in and push an app.

$ cd [my-app-directory]
$ cf api api.[my-cloudfoundry].com
Setting api endpoint to https://api.[my-cloudfoundry].com...
OK

$ cf login
API endpoint: https://api.[my-cloudfoundry].com

Email> [my-email]

Password> [my-password]
Authenticating...
OK

$ cf push

Further Reading and Getting Help

  • You can find further documentation at the docs page for the CLI here.
  • There is also help available in the CLI itself; type cf help for more information.
  • Each command also has help output available via cf [command] --help or cf [command] -h.
  • For development guide on writing a cli plugin, see here.
  • Finally, if you are still stuck or have any questions or issues, feel free to open a GitHub issue.

Downloads

Latest stable: Download the installer or compressed binary for your platform:

Mac OS X 64 bit Windows 64 bit Linux 64 bit
Installers pkg zip rpm / deb
Binaries tgz zip tgz

From the command line: Download examples with curl for Mac OS X and Linux

# ...download & extract Mac OS X binary
$ curl -L "https://cli.run.pivotal.io/stable?release=macosx64-binary&source=github" | tar -zx
# ...or Linux binary
$ curl -L "https://cli.run.pivotal.io/stable?release=linux64-binary&source=github" | tar -zx
# ...and confirm you got the version you expected
$ ./cf --version
cf version x.y.z-...

Experimental: Install the cf CLI on OS X through Homebrew via Pivotal's homebrew-tap:

$ brew tap pivotal/tap
$ brew install cloudfoundry-cli

Also, edge binaries are published for Mac OS X 64 bit, Windows 64 bit and Linux 64 bit with each new 'push' that passes through CI. These binaries are not intended for wider use; they're for developers to test new features and fixes as they are completed.

Releases: 32-bit releases and information about all our releases can be found here.

Troubleshooting / FAQs

Known Issues

  • .cfignore used in cf push must be in UTF-8 encoding for the CLI to interpret it correctly.

Linux

Filing Bugs

For simple bugs (e.g. text formatting, help messages, etc.), please provide
  • the command you ran
  • what occurred
  • what you expected to occur
For bugs related to HTTP requests or strange behavior, please run the command with env var CF_TRACE=true and provide
  • the command you ran
  • the trace output
  • a high-level description of the bug
For panics and other crashes, please provide
  • the command you ran
  • the stack trace generated (if any)
  • any other relevant information

Forking the repository for development

  1. Install Go
  2. Ensure your $GOPATH is set correctly
  3. Install godep
  4. Get the cli source code: go get github.com/cloudfoundry/cli
    • (Ignore any warnings about "no buildable Go source files")
  5. Run godep restore (note: this will modify the dependencies in your $GOPATH)
  6. Fork the repository
  7. Add your fork as a remote: cd $GOPATH/src/github.com/cloudfoundry/cli && git remote add your_name https://github.com/your_name/cli

Building

To prepare your build environment, run go get -u github.com/jteeuwen/go-bindata/...

  1. Run ./bin/build
  2. The binary will be built into the ./out directory.

Optionally, you can use bin/run to compile and run the executable in one step.

If you want to run the tests with ginkgo, or build with go build you should first run bin/generate-language-resources. bin/build and bin/test generate language files automatically.

Developing

  1. Install Mercurial
  2. Run go get golang.org/x/tools/cmd/vet
  3. Write a Ginkgo test.
  4. Run bin/test and watch the test fail.
  5. Make the test pass.
  6. Submit a pull request to the master branch.

**For a development guide on writing a CLI plugin, see here.**

Contributing

Major new feature proposals are given as publicly viewable Google documents with commenting allowed, and are discussed on the cf-dev mailing list.

Pull Requests

Pull Requests should be made against the master branch.

Architecture overview

A command is a struct that implements this interface:

type Command interface {
    MetaData() CommandMetadata
    SetDependency(deps Dependency, pluginCall bool) Command
    Requirements(requirementsFactory requirements.Factory, context flags.FlagContext) (reqs []requirements.Requirement, err error)
    Execute(context flags.FlagContext)
}

Source code

MetaData() is just a description of the command's name, usage, and flags:

type CommandMetadata struct {
    Name            string
    ShortName       string
    Usage           string
    Description     string
    Flags           map[string]flags.FlagSet
    SkipFlagParsing bool
    TotalArgs       int
}

Source code

Requirements() returns a list of requirements that need to be met before a command can be invoked.

Execute() is the method that your command implements to do whatever it's supposed to do. The context object provides flags and arguments.

When the command is run, it communicates with the API using repositories (found in cf/api).

SetDependency() is where a command obtains its dependencies. Dependencies are typically declared as an interface type, and not a concrete type, so tests can inject a fake. The bool argument pluginCall indicates whether the command is invoked by one of the CLI's plugin API methods.

Dependencies are injected into each command (see cf/command_registry/dependency.go).

Some dependencies are managed by a repository locator in cf/api/repository_locator.go.

Repositories communicate with the api endpoints through a Gateway (see cf/net).

Models are data structures related to Cloud Foundry (see cf/models). For example, some models are apps, buildpacks, domains, etc.
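A drastically simplified, self-contained sketch of this command pattern follows. The types here are toy stand-ins, not the CLI's real ones, to illustrate how a registry maps names to commands and how dependencies get injected before execution:

```go
package main

import "fmt"

// Dependency is a toy stand-in for the CLI's dependency container.
type Dependency struct{ API string }

// Command mirrors the shape of the interface described above, simplified.
type Command interface {
	Name() string
	SetDependency(deps Dependency) Command
	Execute(args []string) string
}

// CreateSpace is a toy command implementing the interface.
type CreateSpace struct{ deps Dependency }

func (c CreateSpace) Name() string { return "create-space" }

// SetDependency returns a copy of the command wired with its dependencies,
// so tests can inject fakes.
func (c CreateSpace) SetDependency(d Dependency) Command { c.deps = d; return c }

func (c CreateSpace) Execute(args []string) string {
	// A real command would call a repository through its dependencies here.
	return fmt.Sprintf("creating space %q against %s", args[0], c.deps.API)
}

// registry maps command names to commands, loosely like cf's command_registry.
var registry = map[string]Command{}

func register(c Command) { registry[c.Name()] = c }
```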

Managing dependencies

Command dependencies are managed by the command registry package. The app uses the package (in cf/command_registry/dependency.go) to instantiate them; this avoids sharing knowledge of their dependencies with the app itself.

For commands that use another command as a dependency, command_registry is used to retrieve the command dependency. For example, the restart command depends on the start and stop commands; see restart.go for how the dependency is retrieved.

As for repositories, we use the repository locator to handle their dependencies. You can find it in cf/api/repository_locator.go.

Example command

Create Space is a good example of a command. Its tests include checking arguments, requiring the user to be logged in, and the actual behavior of the command itself. You can find it in cf/commands/space/create_space.go.

i18n

All pull requests which include user-facing strings should include updated translation files. These files are generated and maintained using i18n4go.

To add or update translation strings, run i18n4go -c fixup. For each change or update, you will be presented with the choices new or upd; type the appropriate choice. If upd is chosen, you will be asked to confirm which string is being updated, using a numbered list.

Current conventions

Creating Commands

Resources that include several commands have been broken out into their own sub-package using the Resource name. An example of this convention is the Space resource and package (see cf/commands/space)

In addition, command file and method naming follows a CRUD-like convention. For example, the Space resource includes commands such as CreateSpace, ListSpaces, DeleteSpace, etc.

Creating Repositories

Although not ideal, we use the name "Repository" for API related operations as opposed to "Service". Repository was chosen to avoid confusion with Service model objects (i.e. creating Services and Service Instances within Cloud Foundry).

By convention, Repository methods return a model object and an error. Models are used in both Commands and Repositories to model Cloud Foundry data. This convention provides a consistent method signature across repositories.
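For illustration, the convention looks roughly like this (toy model and repository, not actual CLI code), which also shows why interface-typed repositories make fakes easy to inject in tests:

```go
package main

import "fmt"

// Space is a toy stand-in for a cf/models model object.
type Space struct{ Name, GUID string }

// SpaceRepository follows the convention: methods return a model object
// and an error, giving a consistent signature across repositories.
type SpaceRepository interface {
	FindByName(name string) (Space, error)
}

// fakeSpaceRepository is the kind of in-memory fake a command test
// would inject in place of a real API-backed repository.
type fakeSpaceRepository struct{ spaces map[string]Space }

func (r fakeSpaceRepository) FindByName(name string) (Space, error) {
	s, ok := r.spaces[name]
	if !ok {
		return Space{}, fmt.Errorf("space %s not found", name)
	}
	return s, nil
}
```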



  • codimd: CodiMD, realtime collaborative markdown notes on all platforms
  • codimd-container: CodiMD container image resources
  • concourse: BOSH release and development workspace for Concourse
  • ECOMSport

CodiMD

Standard - JavaScript Style Guide

Join the chat at https://gitter.im/hackmdio/hackmd #CodiMD on matrix.org build status version POEditor

CodiMD lets you create real-time collaborative markdown notes on all platforms. It is inspired by Hackpad, with more focus on speed and flexibility, and is built from the HackMD source code. Feel free to contribute.

Thanks for using CodiMD! :smile:

Table of Contents

HackMD CE became CodiMD

CodiMD was recently renamed; its former name was HackMD. CodiMD is the free software version of HackMD and was the original version of it. The HackMD team initiated CodiMD and provided a solid code base. Due to the need to pay bills, a fork called HackMD EE was created, which is a SaaS (Software as a Service) product available at hackmd.io.

We decided to change the name to break the confusion between HackMD and CodiMD, formerly known as HackMD CE, as it never was an open-core project.

Just to add to the confusion: we are still friends with HackMD :heart:

For the whole renaming story, see the related issue

Browsers Requirement

  • Chrome Chrome >= 47, Chrome for Android >= 47
  • Safari Safari >= 9, iOS Safari >= 8.4
  • Firefox Firefox >= 44
  • IE IE >= 9, Edge >= 12
  • Opera Opera >= 34, Opera Mini not supported
  • Android Browser >= 4.4

Installation

Getting started (Native install)

Prerequisite

  • Node.js 6.x or up (tested up to 7.5.0) and <10.x
  • Database (PostgreSQL, MySQL, MariaDB, SQLite, MSSQL) use charset utf8
  • npm (and its dependencies, especially uWebSockets, node-gyp)
  • For building CodiMD we recommend using a machine with at least 2GB of RAM

Instructions

  1. Download a release and unzip it, or clone into a directory
  2. Enter the directory and type bin/setup, which will install npm dependencies and create configs. The setup script is written in Bash, so you need Bash as a prerequisite.
  3. Set up the configs (see more below)
  4. Set environment variables, which will override the configs
  5. Build the front-end bundle with npm run build (use npm run dev if you are developing)
  6. Modify the file named .sequelizerc, changing the value of the variable url to your DB connection string, for example: postgres://username:password@localhost:5432/codimd
  7. Run node_modules/.bin/sequelize db:migrate; this step will migrate your DB to the latest schema
  8. Run the server as you like (node, forever, pm2)

Heroku Deployment

You can quickly set up a sample Heroku CodiMD application by clicking the button below.

Deploy on Heroku

If you deploy it without the button, make sure to use the right buildpacks. For details, check app.json.

Kubernetes

To install use helm install stable/hackmd.

For all further details, please check out the official CodiMD K8s helm chart.

CodiMD by docker container

Try in PWD

Debian-based version:

latest

Alpine-based version:

alpine

The easiest way to set up CodiMD using Docker is with the following three commands:

git clone https://github.com/hackmdio/codimd-container.git
cd codimd-container
docker-compose up

Read more about it in the container repository…

Cloudron

Install CodiMD on Cloudron:

Install

Upgrade

Native setup

If you are upgrading CodiMD from an older version, follow these steps:

  1. Fully stop your old server first (important)
  2. git pull or do whatever updates the files
  3. npm install to update dependencies
  4. Build the front-end bundle with npm run build (use npm run dev if you are developing)
  5. Modify the file named .sequelizerc, changing the value of the variable url to your DB connection string, for example: postgres://username:password@localhost:5432/codimd
  6. Run node_modules/.bin/sequelize db:migrate; this step will migrate your DB to the latest schema
  7. Start your whole new server!
  • migrate-to-1.1.0

We deprecated the older lower-case config style and moved to camel-case style. Please have a look at the current config.json.example and check the warnings on startup.

Notice: this is not a breaking change right now, but it will be in the future.

We no longer use LZString to compress socket.io data and DB data as of version 0.5.0. Please run the migration tool if you're upgrading from an older version.

We dropped MongoDB after version 0.4.0, so here is the migration tool for transferring old DB data to the new DB. This tool is also used for the official service.

Configuration

There are some config settings you need to change in the files below.

./config.json    --- application settings

Environment variables (will overwrite other server configs)

variables example values description
NODE_ENV production or development set current environment (will apply corresponding settings in the config.json)
DEBUG true or false set debug mode; show more logs
CMD_CONFIG_FILE /path/to/config.json optional override for the path to CodiMD's config file
CMD_DOMAIN codimd.org domain name
CMD_URL_PATH codimd sub URL path, like www.example.com/<URL_PATH>
CMD_HOST localhost host to listen on
CMD_PORT 80 web app port
CMD_PATH /var/run/codimd.sock path to UNIX domain socket to listen on (if specified, CMD_HOST and CMD_PORT are ignored)
CMD_ALLOW_ORIGIN localhost, codimd.org domain name whitelist (use comma to separate)
CMD_PROTOCOL_USESSL true or false set to use SSL protocol for resources path (only applied when domain is set)
CMD_URL_ADDPORT true or false set to add port on callback URL (ports 80 or 443 won't be applied) (only applied when domain is set)
CMD_USECDN true or false set to use CDN resources or not (default is true)
CMD_ALLOW_ANONYMOUS true or false set to allow anonymous usage (default is true)
CMD_ALLOW_ANONYMOUS_EDITS true or false if allowAnonymous is true, allow users to select freely permission, allowing guests to edit existing notes (default is false)
CMD_ALLOW_FREEURL true or false set to allow new note creation by accessing a nonexistent note URL
CMD_DEFAULT_PERMISSION freely, editable, limited, locked or private set notes default permission (only applied on signed users)
CMD_DB_URL mysql://localhost:3306/database set the database URL
CMD_SESSION_SECRET no example Secret used to sign the session cookie. If none is set, one will be randomly generated on startup
CMD_SESSION_LIFE 1209600000 Session life time. (milliseconds)
CMD_FACEBOOK_CLIENTID no example Facebook API client id
CMD_FACEBOOK_CLIENTSECRET no example Facebook API client secret
CMD_TWITTER_CONSUMERKEY no example Twitter API consumer key
CMD_TWITTER_CONSUMERSECRET no example Twitter API consumer secret
CMD_GITHUB_CLIENTID no example GitHub API client id
CMD_GITHUB_CLIENTSECRET no example GitHub API client secret
CMD_GITLAB_SCOPE read_user or api GitLab API requested scope (default is api) (GitLab snippet import/export need api scope)
CMD_GITLAB_BASEURL no example GitLab authentication endpoint, set to use other endpoint than GitLab.com (optional)
CMD_GITLAB_CLIENTID no example GitLab API client id
CMD_GITLAB_CLIENTSECRET no example GitLab API client secret
CMD_GITLAB_VERSION no example GitLab API version (v3 or v4)
CMD_MATTERMOST_BASEURL no example Mattermost authentication endpoint for versions below 5.0. For Mattermost version 5.0 and above, see guide.
CMD_MATTERMOST_CLIENTID no example Mattermost API client id
CMD_MATTERMOST_CLIENTSECRET no example Mattermost API client secret
CMD_DROPBOX_CLIENTID no example Dropbox API client id
CMD_DROPBOX_CLIENTSECRET no example Dropbox API client secret
CMD_GOOGLE_CLIENTID no example Google API client id
CMD_GOOGLE_CLIENTSECRET no example Google API client secret
CMD_LDAP_URL ldap://example.com URL of LDAP server
CMD_LDAP_BINDDN no example bindDn for LDAP access
CMD_LDAP_BINDCREDENTIALS no example bindCredentials for LDAP access
CMD_LDAP_SEARCHBASE o=users,dc=example,dc=com LDAP directory to begin search from
CMD_LDAP_SEARCHFILTER (uid={{username}}) LDAP filter to search with
CMD_LDAP_SEARCHATTRIBUTES displayName, mail LDAP attributes to search with (use comma to separate)
CMD_LDAP_USERIDFIELD uidNumber or uid or sAMAccountName The LDAP field which is used to uniquely identify a user on CodiMD
CMD_LDAP_USERNAMEFIELD Fallback to userid The LDAP field which is used as the username on CodiMD
CMD_LDAP_TLS_CA server-cert.pem, root.pem Root CA for LDAP TLS in PEM format (use comma to separate)
CMD_LDAP_PROVIDERNAME My institution Optional name to be displayed at login form indicating the LDAP provider
CMD_SAML_IDPSSOURL https://idp.example.com/sso authentication endpoint of IdP. for details, see guide.
CMD_SAML_IDPCERT /path/to/cert.pem certificate file path of IdP in PEM format
CMD_SAML_ISSUER no example identity of the service provider (optional, default: serverurl)
CMD_SAML_IDENTIFIERFORMAT no example name identifier format (optional, default: urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress)
CMD_SAML_GROUPATTRIBUTE memberOf attribute name for group list (optional)
CMD_SAML_REQUIREDGROUPS Hackmd-users group names that are allowed (use vertical bar to separate) (optional)
CMD_SAML_EXTERNALGROUPS Temporary-staff group names that are not allowed (use vertical bar to separate) (optional)
CMD_SAML_ATTRIBUTE_ID sAMAccountName attribute map for id (optional, default: NameID of SAML response)
CMD_SAML_ATTRIBUTE_USERNAME mailNickname attribute map for username (optional, default: NameID of SAML response)
CMD_SAML_ATTRIBUTE_EMAIL mail attribute map for email (optional, default: NameID of SAML response if CMD_SAML_IDENTIFIERFORMAT is default)
CMD_OAUTH2_USER_PROFILE_URL https://example.com where to retrieve information about a user after successful login. Needs to output JSON. (no default value) Refer to the Mattermost or Nextcloud examples for more details on all of the CMD_OAUTH2... options.
CMD_OAUTH2_USER_PROFILE_USERNAME_ATTR name where to find the username in the JSON from the user profile URL. (no default value)
CMD_OAUTH2_USER_PROFILE_DISPLAY_NAME_ATTR display-name where to find the display-name in the JSON from the user profile URL. (no default value)
CMD_OAUTH2_USER_PROFILE_EMAIL_ATTR email where to find the email address in the JSON from the user profile URL. (no default value)
CMD_OAUTH2_TOKEN_URL https://example.com sometimes called token endpoint, please refer to the documentation of your OAuth2 provider (no default value)
CMD_OAUTH2_AUTHORIZATION_URL https://example.com authorization URL of your provider, please refer to the documentation of your OAuth2 provider (no default value)
CMD_OAUTH2_CLIENT_ID afae02fckafd... you will get this from your OAuth2 provider when you register CodiMD as an OAuth2 client (no default value)
CMD_OAUTH2_CLIENT_SECRET afae02fckafd... you will get this from your OAuth2 provider when you register CodiMD as an OAuth2 client (no default value)
CMD_OAUTH2_PROVIDERNAME My institution Optional name to be displayed at login form indicating the oAuth2 provider
CMD_IMGUR_CLIENTID no example Imgur API client id
CMD_EMAIL true or false set to allow email signin
CMD_ALLOW_PDF_EXPORT true or false Enable or disable PDF exports
CMD_ALLOW_EMAIL_REGISTER true or false set to allow email register (only applied when email is set, default is true. Note bin/manage_users might help you if registration is false.)
CMD_ALLOW_GRAVATAR true or false set to false to disable gravatar as profile picture source on your instance
CMD_IMAGE_UPLOAD_TYPE imgur, s3, minio or filesystem Where to upload images. For S3, see our Image Upload Guides for S3 or Minio
CMD_S3_ACCESS_KEY_ID no example AWS access key id
CMD_S3_SECRET_ACCESS_KEY no example AWS secret key
CMD_S3_REGION ap-northeast-1 AWS S3 region
CMD_S3_BUCKET no example AWS S3 bucket name
CMD_MINIO_ACCESS_KEY no example Minio access key
CMD_MINIO_SECRET_KEY no example Minio secret key
CMD_MINIO_ENDPOINT minio.example.org Address of your Minio endpoint/instance
CMD_MINIO_PORT 9000 Port that is used for your Minio instance
CMD_MINIO_SECURE true If set to true HTTPS is used for Minio
CMD_AZURE_CONNECTION_STRING no example Azure Blob Storage connection string
CMD_AZURE_CONTAINER no example Azure Blob Storage container name (automatically created if non existent)
CMD_HSTS_ENABLE true set to enable HSTS if HTTPS is also enabled (default is true)
CMD_HSTS_INCLUDE_SUBDOMAINS true set to include subdomains in HSTS (default is true)
CMD_HSTS_MAX_AGE 31536000 max duration in seconds to tell clients to keep HSTS status (default is a year)
CMD_HSTS_PRELOAD true whether to allow preloading of the site's HSTS status (e.g. into browsers)
CMD_CSP_ENABLE true whether to enable Content Security Policy (directives cannot be configured with environment variables)
CMD_CSP_REPORTURI https://<someid>.report-uri.com/r/d/csp/enforce Allows to add a URL for CSP reports in case of violations

Note: As part of the rename, all HMD_-prefixed variables were renamed to CMD_-prefixed ones. The old ones continue to work.
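As an illustration, a small production deployment could set a handful of the variables above before starting the server. All values here are placeholders, not defaults:

```shell
# Hypothetical minimal production environment for CodiMD;
# every value below is a placeholder (see the table above)
export NODE_ENV=production
export CMD_DOMAIN=codimd.example.com
export CMD_PROTOCOL_USESSL=true
export CMD_URL_ADDPORT=false
export CMD_DB_URL="postgres://codimd:secret@localhost:5432/codimd"
export CMD_SESSION_SECRET="change-me"
```

Remember that these environment variables overwrite the corresponding config.json settings.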

Application settings config.json

variables example values description
debug true or false set debug mode, show more logs
domain localhost domain name
urlPath codimd sub URL path, like www.example.com/<urlpath>
host localhost host to listen on
port 80 web app port
path /var/run/codimd.sock path to UNIX domain socket to listen on (if specified, host and port are ignored)
allowOrigin ['localhost'] domain name whitelist
useSSL true or false set to use SSL server (if true, will auto turn on protocolUseSSL)
hsts {"enable": true, "maxAgeSeconds": 31536000, "includeSubdomains": true, "preload": true} HSTS options to use with HTTPS (default is the example value, max age is a year)
csp {"enable": true, "directives": {"scriptSrc": "trustworthy-scripts.example.com"}, "upgradeInsecureRequests": "auto", "addDefaults": true} Configures Content Security Policy. Directives are passed to Helmet - see their documentation for more information on the format. Some defaults are added to the configured values so that the application doesn't break. To disable this behaviour, set addDefaults to false. Further, if usecdn is on, some CDN locations are allowed too. By default (auto), insecure (HTTP) requests are upgraded to HTTPS via CSP if useSSL is on. To change this behaviour, set upgradeInsecureRequests to either true or false.
protocolUseSSL true or false set to use SSL protocol for resources path (only applied when domain is set)
urlAddPort true or false set to add port on callback URL (ports 80 or 443 won't be applied) (only applied when domain is set)
useCDN true or false set to use CDN resources or not (default is true)
allowAnonymous true or false set to allow anonymous usage (default is true)
allowAnonymousEdits true or false if allowAnonymous is true: allow users to select freely permission, allowing guests to edit existing notes (default is false)
allowFreeURL true or false set to allow new note creation by accessing a nonexistent note URL
defaultPermission freely, editable, limited, locked, protected or private set notes default permission (only applied on signed users)
dbURL mysql://localhost:3306/database set the db URL; if set, then db config (below) won't be applied
db { "dialect": "sqlite", "storage": "./db.codimd.sqlite" } set the db configs, see more here
sslKeyPath ./cert/client.key SSL key path1 (only need when you set useSSL)
sslCertPath ./cert/codimd_io.crt SSL cert path1 (only need when you set useSSL)
sslCAPath ['./cert/COMODORSAAddTrustCA.crt'] SSL ca chain1 (only need when you set useSSL)
dhParamPath ./cert/dhparam.pem SSL dhparam path1 (only need when you set useSSL)
tmpPath ./tmp/ temp directory path1
defaultNotePath ./public/default.md default note file path1
docsPath ./public/docs docs directory path1
viewPath ./public/views template directory path1
uploadsPath ./public/uploads uploads directory1 - needs to be persistent when you use imageUploadType filesystem
sessionName connect.sid cookie session name
sessionSecret secret cookie session secret
sessionLife 14 * 24 * 60 * 60 * 1000 cookie session life
staticCacheTime 1 * 24 * 60 * 60 * 1000 static file cache time
heartbeatInterval 5000 socket.io heartbeat interval
heartbeatTimeout 10000 socket.io heartbeat timeout
documentMaxLength 100000 note max length
email true or false set to allow email signin
oauth2 {baseURL: ..., userProfileURL: ..., userProfileUsernameAttr: ..., userProfileDisplayNameAttr: ..., userProfileEmailAttr: ..., tokenURL: ..., authorizationURL: ..., clientID: ..., clientSecret: ...} An object detailing your OAuth2 provider. Refer to the Mattermost or Nextcloud examples for more details!
allowEmailRegister true or false set to allow email register (only applied when email is set, default is true. Note bin/manage_users might help you if registration is false.)
allowGravatar true or false set to false to disable gravatar as profile picture source on your instance
imageUploadType imgur, s3, minio, azure or filesystem(default) Where to upload images. For S3, see our Image Upload Guides for S3 or Minio
minio { "accessKey": "YOUR_MINIO_ACCESS_KEY", "secretKey": "YOUR_MINIO_SECRET_KEY", "endpoint": "YOUR_MINIO_HOST", port: 9000, secure: true } When imageUploadType is set to minio, you need to set this key. Also checkout our Minio Image Upload Guide
s3 { "accessKeyId": "YOUR_S3_ACCESS_KEY_ID", "secretAccessKey": "YOUR_S3_ACCESS_KEY", "region": "YOUR_S3_REGION" } When imageUploadType is set to s3, you also need to set this key; check our S3 Image Upload Guide
s3bucket YOUR_S3_BUCKET_NAME bucket name when imageUploadType is set to s3 or minio

1: relative paths are based on CodiMD's base directory
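Pulling a few of the keys above together, a minimal config.json might look like the sketch below (entries are keyed by environment, matching NODE_ENV; all values are illustrative, not defaults):

```json
{
    "production": {
        "domain": "codimd.example.com",
        "protocolUseSSL": true,
        "db": {
            "dialect": "sqlite",
            "storage": "./db.codimd.sqlite"
        },
        "sessionSecret": "change-me",
        "allowAnonymous": true,
        "imageUploadType": "filesystem"
    }
}
```

Compare with the shipped config.json.example for the full set of supported keys.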

Third-party integration API key settings

service settings location description
facebook, twitter, github, gitlab, mattermost, dropbox, google, ldap, saml environment variables or config.json for signin
imgur, s3, minio, azure environment variables or config.json for image upload
dropbox(dropbox/appKey) config.json for export and import

Third-party integration OAuth callback URLs

service callback URL (after the server URL)
facebook /auth/facebook/callback
twitter /auth/twitter/callback
github /auth/github/callback
gitlab /auth/gitlab/callback
mattermost /auth/mattermost/callback
dropbox /auth/dropbox/callback
google /auth/google/callback
saml /auth/saml/callback
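The URL you register with each provider is your server URL plus the path from the table above. For example, with a hypothetical server URL:

```shell
# Hypothetical server URL; append the provider path from the table above
SERVER_URL="https://codimd.example.com"
CALLBACK_URL="${SERVER_URL}/auth/github/callback"
echo "$CALLBACK_URL"
```

The same pattern applies to every provider in the table.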

Developer Notes

Structure

codimd/
├── tmp/            --- temporary files
├── docs/           --- document files
├── lib/            --- server libraries
└── public/         --- client files
    ├── css/        --- css styles
    ├── js/         --- js scripts
    ├── vendor/     --- vendor includes
    └── views/      --- view templates

Operational Transformation

From 0.3.2, we started supporting operational transformation. It makes concurrent editing safe and will not break up other users' operations. Additionally, it can now show other clients' selections. See more at http://operational-transformation.github.io/

License

Licensed under AGPL.

See the readme

CodiMD container


Debian-based version:

Alpine-based version:

Prerequisite

See more here: https://docs.docker.com/

Usage

Get started

  1. Install docker and docker-compose ("Docker for Windows" and "Docker for Mac" already include docker-compose)
  2. Run git clone https://github.com/hackmdio/codimd-container.git
  3. Change to the codimd-container directory
  4. Run docker-compose up in your terminal
  5. Wait until you see the log HTTP Server listening at port 3000; this may take a few minutes depending on your internet connection.
  6. Open http://127.0.0.1:3000

Update

Start Docker, open a terminal, and run the commands below:

cd codimd-container ## enter the directory
git pull ## pull new commits
docker-compose pull ## pull new containers
docker-compose up ## turn on

Migrate from docker-hackmd

If you used the docker-hackmd repository before, migrating to codimd-container is easy.

Since codimd-container is basically a fork of docker-hackmd, all you need to do is replace the upstream URL.

git remote set-url origin https://github.com/hackmdio/codimd-container.git
git pull

Now you can follow the regular update steps.

migration-to-0.5.0

We don't use LZString to compress socket.io data and DB data after version 0.5.0. Please run the migration tool if you're upgrading from the old version.

  1. Stop your CodiMD containers
  2. Modify docker-compose.yml to expose port 5432 on hackmdPostgres
  3. Run docker-compose up to start your CodiMD containers
  4. Back up the DB (see below)
  5. Clone the migration-to-0.5.0 tool above and run npm install (see the link above for details)
  6. Modify config.json in migration-to-0.5.0, changing its username, password and host to match your Docker setup
  7. Run the migration (see the link above for details)
  8. Stop your CodiMD containers
  9. Modify docker-compose.yml to remove the exposed port 5432 from hackmdPostgres
  10. Run git pull in codimd-container to update to version 0.5.0 (see below)
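For steps 2 and 9, the change to docker-compose.yml is exposing the Postgres port on the database service. A sketch (service name taken from the steps above; the exact layout of your file may differ):

```yaml
hackmdPostgres:
  ports:
    - "5432:5432"   # temporarily exposed for the migration (step 2); remove again in step 9
```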

Backup

Start Docker, open a terminal, and run the following command:

 docker-compose exec database pg_dump hackmd -U hackmd  > backup.sql
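A variation on the command above, assuming the same service and user names, writes the dump to a timestamped file so repeated backups don't overwrite each other:

```shell
# Timestamped backup file name, e.g. backup-20240101.sql
BACKUP_FILE="backup-$(date +%Y%m%d).sql"
# Same pg_dump as above, guarded so the script is a no-op where docker-compose is unavailable
if command -v docker-compose >/dev/null 2>&1; then
    docker-compose exec database pg_dump hackmd -U hackmd > "$BACKUP_FILE"
fi
```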

Restore

Similar to the backup steps, but the last command is:

cat backup.sql | docker exec -i $(docker-compose ps -q database) psql -U hackmd

Kubernetes

To install, run helm install stable/hackmd.

For all further details, please check out the official HackMD K8s helm chart.

Custom build

The default setup uses a pre-built Docker image. If you want to build your own containers, uncomment the build section in docker-compose.yml and edit config.json.

If you change the database settings and don't use the HMD_DB_URL make sure you edit the .sequelizerc.

License

View license information for the software contained in this image.

Supported Docker versions

This image is officially supported on Docker version 17.03.1-CE.

Support for older versions (down to 1.12) is provided on a best-effort basis.

Please see the Docker installation documentation for details on how to upgrade your Docker daemon.

User Feedback

Issues

If you have any problems with or questions about this image, please contact us through a GitHub issue.

You can also reach many of the project maintainers via our #codimd:matrix.org room or the hackmd channel on Gitter.

Contributing

You are invited to contribute new features, fixes, or updates, large or small; we are always thrilled to receive pull requests, and do our best to process them as fast as we can.

Happy CodiMD :smile:

See the readme

concourse slack.concourse.ci

Concourse is a pipeline-based CI system written in Go.

Contributing

Concourse is built on a few components, all written in Go with cutesy aerospace-themed names. This repository is actually its BOSH release, which ties everything together and also serves as the central hub for GitHub issues.

Each component has its own repository:

  • ATC is most of Concourse: it provides the API, web UI, and all pipeline orchestration
  • Fly is the CLI for interacting with and configuring Concourse pipelines
  • TSA is an SSH server used for authorizing worker registration
  • Garden is a generic interface for orchestrating containers remotely on a worker
  • Baggageclaim is a server for managing caches and artifacts on the workers

To learn more about how they fit together, see Concourse Architecture.

See the readme
