May 30, 2014

JFrog joins the Cloud Foundry Foundation to help open source integration leap forward!

We are excited to join the Cloud Foundry Foundation as a Silver member!

Well, that was a natural move for the frogs. As true believers in OSS, integration, and the cloud as a platform, it was only natural for us to join the other members of the foundation and offer our value and experience to the community.

JFrog streamlines the software development process by introducing smart, agile techniques for managing binaries and software packages. We were founded around an open source product - Artifactory, which now has over 20K installations around the world. Artifactory's cloud-based offering serves and sponsors many OSS projects, and we offer the service free of charge to support open source users.

Projects including Pivotal's Spring and Grails have been using the Artifactory SaaS version for the last few years, and our newest cloud-based social project - Bintray - serves the OSS community by providing a free distribution platform for projects, including Pivotal's Spring and Groovy.

So yes, it was just natural for the frogs, and we look forward to driving the open source community even further forward, collaborating around cloud-based binary management solutions on the Cloud Foundry platform.

"Driven by Open Source and committed to the community, JFrog has always seen itself as a key influencer in promoting platforms for managing, packaging and distributing artifacts," said Fred Simon, Co-founder and Chief Architect of JFrog. "Through Artifactory and Bintray, JFrog already serves the SpringSource, Grails and Groovy communities, and we now see ourselves as even more committed as we join the Cloud Foundry Foundation."

April 9, 2014

Private npm Registry With Artifactory

The main reason for Node's explosive popularity is its thriving ecosystem. Likewise, it's well understood that the main reason for that ecosystem's growth is npm, Node's package manager. npm usage has skyrocketed, with statistics showing over 4 million packages downloaded a day and over 68,000 packages publicly available - and the numbers just keep going up. In fact, Node.js and npm are now growing at twice the rate of any other software platform today.
Packages per day across popular platforms.

With great power, comes great responsibility


I wish that were true, but I'm afraid that, like any other big, fast-growing system, you should expect growing pains.
Which means that if your builds rely solely on the public npm registry, you are entering a world of pain.

The solution should be easy enough:
“The easiest way is to replicate the couch database, and use the same (or similar) design doc to implement the APIs.”

Personally, I wouldn’t call that easy, not to mention that it’s a waste of resources:

  • Why would you want to periodically replicate the entire CouchDB when you only need the packages your build uses? Those packages should be lazy-cached on demand! 
  • You now need someone to administer this CouchDB instead of using an out-of-the-box solution. 
  • What about aggregating multiple registries? You’re out of luck there since npm doesn’t currently support multiple registries.
  • What about the security model? You should be able to control who has access to what, and the current security model doesn’t allow you to do that. 

Meet Artifactory, with npm support!

So to answer the needs detailed above, here is what Artifactory can offer:

The basic stuff:

  • Remote repositories to proxy remote npm registries - The most important one is the public npm registry (registry.npmjs.org), but this can be applied to any compatible npm registry. This provides lazy, on-demand caching for packages and metadata. 
  • Local repositories to store private npm packages - Easily store and share private npm packages using what we call “Local Repositories”. These packages can be shared easily and safely among internal teams that need them. 

But that’s not all. There’s much more to it when using a smart binary repository manager:

  • Virtual Repositories - No need for the npm client to support multiple registries. Simply define a virtual repository which aggregates the local repositories that contain your in-house packages, and the remote repositories that proxy the public npm registry or any other compatible npm repository. 
  • Authentication and authorization - An enhanced security model which gives you full control over who can download or publish what to where.
  • Searches (including npm search) - Use the inherent npm search command, or utilize Artifactory’s powerful search capabilities such as searching by property or checksum and more.
  • Powerful custom user plugins platform - Enormous flexibility to customize how you work with npm packages. The sky’s the limit. 

So, are you ready to start using Artifactory with npm support? The full documentation is available in our user guide.
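For example, pointing the npm client at an Artifactory virtual repository is a one-line configuration change. This is a minimal sketch; the host name and the repository key (npm-virtual) are placeholders for your own setup:

```shell
# Point npm at an Artifactory virtual npm repository
# (replace the host and repo key with your own instance)
npm config set registry http://artifactory.mycompany.com:8081/artifactory/api/npm/npm-virtual/

# Verify the active registry
npm config get registry

# From now on, installs resolve through Artifactory, which lazily
# caches public packages and serves your private ones
npm install express
```

Once this is set, no change to your package.json or build scripts is needed - every `npm install` goes through the virtual repository.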

You can register to download and install your free 30-day evaluation locally, or use the cloud version with Artifactory Online.

April 3, 2014

Power to the People - Customize and Extend Artifactory with User Plugins

From our experience with thousands of Artifactory users, we know one thing for sure: we don't know better. Every organization does its ALM differently: artifact approval flow, snapshot retention policies, build-to-release flow, governance, required metadata and much, much more - each organization is different. We definitely have some ideas on how the build and deploy process should look, but there are so many things that make your process unique. And that's good. After all, you aren't paid for working within the ideal deployment cycle, but rather for solving a business problem. At least we hope so.

Acknowledging the fact that we don't know better complicates our lives as creators of a binary repository... and not only by hurting our ego. We want to give you the perfect tool for the job, but how can we do it without dictating to you what your job is? The solution is well known - extensions, a.k.a. add-ons, user plugins, you name it.

"OMG!", you might say. "Code! Joy-joy! Finally, an excuse to hack around!" Or "OMG! Code! It's your job to code those things into your product, not mine!" Look, either way, we don't have much choice, do we? When it comes to customization, you have to tell Artifactory what you want it to do. We can only do our best to make it simple for you. So, we developed a simple DSL.

In this post, I'll show you how easy it is to customize Artifactory with user plugins. Here's the story: you want to prevent the download of deprecated artifacts. The deprecation information is attached as a set of custom properties to the artifacts by some quality-assurance mechanism (or organism).

Let's say, for example, the artifacts to be banned from download are annotated with property deprecated=true. Artifactory allows you to code callbacks that will be executed in response to various events in the system. You can find the list of available callbacks in the User Plugins documentation. So, we are going to write a download plugin and the callback we are looking for is the altResponse. In this callback, we can provide an alternative response instead of the one Artifactory was asked for. Here's the code:

 1 download {
 2     altResponse { request, responseRepoPath ->
 3         def deprecated = repositories.getProperties(responseRepoPath).getFirst('deprecated')
 4         if (deprecated && deprecated.toBoolean()) {
 5             status = 403
 6             message = 'This artifact was deprecated, please use some alternative.'
 7             log.warn "Request was made for deprecated artifact: $responseRepoPath.";
 8         }
 9     }
10 }

10 lines of code. That’s all. Let's examine them. First thing to notice: it's Groovy! If you are into it, good for you, enjoy! If you aren't, don't worry. It's almost like Java, so you'll read it without problems and will be productive from day 0.
So, here we go, line by line:
  1. Declares that it's a download plugin.
  2. Defines the callback type we want (altResponse). When we implement the alternative response, Artifactory provides us with 2 objects:
    • The request, an instance of org.artifactory.request.Request. It encapsulates information about the incoming request, such as client details and the information requested.
    • And responseRepoPath, an instance of org.artifactory.repo.RepoPath. It encapsulates the information about the artifact to be returned.
  3. We want the first value of the 'deprecated' property, if defined on the artifact represented by responseRepoPath.
  4. If the value exists and it is 'true', 1 or 'y' (as declared by Groovy's toBoolean())
  5. set return code to 403 (Forbidden) and
  6. set the correct error message and
  7. optionally, issue a warning to the Artifactory log.
Well, that's all. Now you can see that the dragon of user plugins isn't so scary. Just think about the unique ways you can automate your delivery cycle, apply regulations and checks, or provide your corporate users with a better Artifactory experience. Here are some samples and community-contributed plugins to ignite your imagination.
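To see the plugin in action, you could mark an artifact as deprecated and then try to download it. This is a sketch rather than a definitive recipe: the host, credentials, repository name and artifact path are all placeholders, and it assumes Artifactory's standard REST API for setting item properties:

```shell
# Mark an artifact as deprecated via the item-properties REST API
# (host, credentials, repo and path are placeholders)
curl -u admin:password -X PUT \
  "http://localhost:8081/artifactory/api/storage/libs-release-local/org/acme/lib/1.0/lib-1.0.jar?properties=deprecated=true"

# A subsequent download attempt should now be rejected by the
# plugin's altResponse callback with the 403 status it sets
curl -u admin:password -o /dev/null -w "%{http_code}\n" \
  "http://localhost:8081/artifactory/libs-release-local/org/acme/lib/1.0/lib-1.0.jar"
```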

Enjoy your build!

This is an updated repost of an old and forgotten post, now featuring the latest plugins API!

December 5, 2013

Introducing First Class RubyGems Support in Artifactory

Here's a short and down-to-business screencast that shows how to set up a feature-rich hosted RubyGems repository. You'll get the full monty - local repositories for sharing your private gems, remote repositories to stop being dependent on RubyGems.org, and a virtual repository that unifies and simplifies configuration. Of course, it plays awesomely with Jenkins (by using the Jenkins Artifactory Plugin), including the release management functionality.

Make yourself a cup of coffee and spend 6 minutes to get a clue on how powerful Ruby binary management can be.

Your comments and thoughts are welcome, both here and on YouTube. Still not sure why you need it? Read more.

August 22, 2013

Taking Control of App Releases

Featuring report "Release Management for Enterprises", by RebelLabs

Release Management: More Relevant Than Ever

Today’s software users have rapidly evolving needs, are mobile, and expect 24/7 connectivity and reliability. So dev teams need to churn out new features and versions frequently to keep up while still making sure that service is not interrupted. Sounds like a tall order, but fail to do so, and users switch to competitors or other alternatives.

Dev teams have a multitude of collaboration tools, Kanban boards, build tools and agile practices, and can build features quickly. But when it comes to releasing these changes, the process is more manual, ad hoc and slow. This is why it is important to take a close look at our release processes, streamline them, and take software to users quickly and safely.

Step 1: Tear Down Walls and Collaborate

The primary issue around streamlining release processes in an enterprise is, you guessed it, cultural differences and team fragmentation. The complexity of software projects and the need for specialists have greatly contributed to creating silos. Dev teams focus on developing new features while ops teams ensure service reliability. Their goals are opposed to each other, driving a wedge between them and further breaking down lines of communication.

Collaboration is vital. Tear down walls and work together on:
  • Critiquing your processes. Look for dependencies and bottlenecks. Address them.
  • Exploring tools for automation. Automate repetitive and error-prone processes.
  • Driving cultural change. Communicate successes and celebrate them.
Look for small opportunities that can bring big improvements. This is the kind of awesome stuff DevOps is made up of :).

Step 2: The Fun Part - Automate!

That's right. Once you have optimized release processes, torn down the fences and felt good doing it, look for opportunities to automate.

Check out this 32 page report by RebelLabs that shows you how you can create an automated release pipeline from scratch using:

  • GitHub: version control system (VCS) where devs check in code
  • Bamboo: CI tool that pulls changes from the VCS to generate builds
  • Arquillian: to run integrated tests on newly generated builds 
  • Selenium: to run acceptance tests on newly generated builds
  • Artifactory: repository to store release artifacts, and build and test results
  • LiveRebel: release automation tool to deploy app updates with zero downtime
The image below is a snapshot of how these freely available tools work together.

Try it out with a new project and practice continuous delivery with all the checks and balances that come with proven release management practices.

Your Next Step

Get the report, read the discussion around release management issues and solutions, and then take control by building your own continuous delivery pipeline!

August 13, 2013

Does Ruby Need a Mature Binary Repository?

At some point in time, a Ruby developer realized the need to serve gems within a private network. The main reasons why:
  • You can't rely on RubyGems.org always being available.
  • You need a place to host gems that are not available in RubyGems. Those can be of two flavors:
    • Something not hosted at RubyGems. For example, Vagrant.
    • Something internal (neither open source nor public); something you want to share with your colleagues, but as a gem rather than source.
Then the question arises - where can those gems be stored? Natural answer: Git(Hub) or any other source control. So...

Source control is the way to share files, right? No, it isn't.

Of course, source control works just fine for sources - we’ve used it for ages. But gems aren't source, they are binaries. And source control is, well, for sources. So what? Why won't it fit? Some reasons include:
  • First and foremost - a version control system (VCS) is not a gem repository! It can't calculate indexes on the server and it doesn't support any dynamic REST API, such as the dependency resolution API used by Bundler (which makes resolution much faster).
  • Versioning mismatch. Source files are versioned by their content. VCSs know how to differentiate them and understand what changed. Binaries, on the other hand, are usually versioned by their name. From the VCS point of view, they are different entries, each one without any version history.
  • Some very popular VCSs (like Subversion) can’t obliterate files. That means - once a file is added, it stays in the repository forever. That’s not a big issue for small source files, but can become quite a pain when it comes to obsolete, large binaries.
  • Source control knows how to search sources. And, of course, the most important type of search is by content. However, searching for binaries is different: what matters most is the file metadata, the location, the structure of the file name and, in the case of an archived artifact, the contents of the archive.
  • The permissions scheme of VCSs is tailored for versioning sources (again!). For example, there is no override permission. That’s because overriding sources is something we do all the time in a VCS - it’s at the same security level as, let's say, adding a new source file. However, the situation is very different with binaries. While adding new binaries is fine, overriding a released binary is something that shouldn’t be done (one should need a special permission for it).
  • Distributed VCSs (yes, Git, I am looking at you!) are awesome by themselves, but particularly unsuited for handling big binary files. When cloning a remote repository to your machine, you are bringing all the history of its files. Now just think about all the huge binaries sitting there...
By now, you should be convinced that source control isn’t a good place for binaries. What we actually need is an installable RubyGems server! And guess what? There are a couple of options:

Go get yourself a RubyGems server

  • Gem in a Box is a Sinatra application that provides, well, a gems server. It's nice, but a bit naive: no built-in authentication, no authorization, no repository separation, and no proxying of other servers (e.g., no proxy for RubyGems.org).
  • GemFury is a very basic, subscription-based cloud-hosted gems server. You get a private repository, protected with an obscure URL. Again, pretty basic stuff here - no proxy for RubyGems.org (or any other repo), no authentication model for collaborators, and no virtual aggregation of repositories in case you have more than one.
What can I say? The Ruby universe is not very sophisticated when it comes to managing binaries - and that's OK (after all, Ruby is about source, usually open source). But there’s something the Ruby community can borrow from the “dark Java Enterprise” side - the proper binary repository. And we have one to offer...

Welcome to the dark side and see the light. Meet Artifactory, with RubyGems support:

Let's start with the basics. The binary repository serves two main goals:
  1. A proxy of remote RubyGems repositories. First and foremost, RubyGems.org; but also any instance of GemFury, Gem in a Box, etc. out there. These are called “remote repositories.”
  2. A deployment target for your gems. Everything you don't want to put on RubyGems.org for any reason, and everything you need but other repositories don't have. These are called “local repositories.”
On top of that we add:
  • Virtual repositories to aggregate any number of remote, local and virtual repositories under a single URL.
  • Authentication and authorization schemes which allow controlling permissions on repositories per user and/or group, including integration with external authorization services.
  • Searching and browsing hosted and remote gems.
  • REST API with Info, Search, Dependencies list and Yank commands.
  • Powerful user plugin framework.
You can get all of this goodness installed on your servers or in the cloud with Artifactory Online, where JFrog will babysit it, upgrade it, and keep it running.

How can you begin using Artifactory with RubyGems support? Simple! The full documentation is available in our user guide:
  1. Install Artifactory on your server (RPM or just an unzipped folder) or get your own instance in a cloud.
  2. Set up some repositories:
    1. Set up a remote repository to proxy RubyGems.org.
    2. Create some local repositories for your gems.
    3. Aggregate them under virtual repositories.
  3. Set up your client to work with the virtual repository you created by running the “gem sources” command.
  4. Enjoy your build using the tools you are used to, e.g. Bundler.
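The client-side setup in step 3 can be sketched as follows. This is a minimal sketch assuming a local Artifactory instance and a virtual repository named rubygems-virtual; both the host and the repository key are placeholders for your own setup:

```shell
# Remove the default public source and add the Artifactory
# virtual repository as the only gem source
gem sources --remove https://rubygems.org/
gem sources --add http://localhost:8081/artifactory/api/gems/rubygems-virtual/

# Verify the configured sources
gem sources --list

# Bundler users can point the Gemfile at the same URL instead:
#   source 'http://localhost:8081/artifactory/api/gems/rubygems-virtual/'
bundle install
```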
You are more than welcome to give Artifactory with RubyGems support a try today - download it or create a cloud instance. We will appreciate your feedback.

Welcome aboard!

August 5, 2013

wOwSCON 2013

I've been a part of the swamp for over a year now, and I've learned that being part of the community means much more than sitting in front of the computer and writing code all day long. One of the ways JFrog stays in touch with the community is by attending conferences, and I had the great pleasure of being part of the JFrog team at OSCon 2013.
This was the first year JFrog had its own booth in the exhibition, presenting both Artifactory and Bintray to the open source crowd.

We arrived at the Portland convention center on July 23rd. Our main goal that day was the exhibition reception. We had a really great kickoff and it only got better over the next couple of days. The amount of people and traffic amazed me, even during sessions.

The conference had about 4,000 attendees, some open source users and some not. You can see the variety of users yourself:

Taken from OSCon 2013 Recap Report

The technical background of the attendees was also very diverse (PHP, Java, Python, Ruby and such). It was nice to converse about different distribution methods and binary management, and to spread the word about our products. Eventually, we got many ideas and much feedback from users, which helps us stay community oriented.

BTW, you can be sure that anyone who came to the booth got our cool JFrog T-shirt ("Yo Adrian, I built it").

Besides the booth, I got the chance to attend some cool lectures:
1. Garrett Smith - CloudBees - Solving Embarrassingly Obvious Problems in Code. The lecture was about writing code properly by following these two steps:
- Write code that works
- Clean the code. Make it simple and readable. 
Code should be obvious to those who read it and protected by tests for those who change it.

2. Hans Dockter - Gradle - Continuous Delivery with Large Software Stacks. Hans discussed the common delivery patterns:
- Binary Snapshots
- Branching
- Single Build
and presented the pros and cons of each build method.
At the end, he presented the Water gates method:
- Separate component builds
- Separate VCS repos
- Latest promoted
- Build is downstream aware
A great lecture for understanding different flows and integrations, learning the differences between them and the impact each has on your working process.

3. Clinton Gormley - Elasticsearch - To Infinity and Beyond - Storing your Moose herd in Elasticsearch.
Using Elasticsearch as a non-transactional DB for full-text search: how to index the data, use analyzers instead of wildcards, and properly define and use Elasticsearch.

We managed to put our marks and logos in different places:

OSCon was a really great place to meet some new interesting people, learn much more about the community and view what's really going on behind the code.

Presentations from the conference can be found here:

P.S. here's a nice puzzle for you (click to enlarge):