Breaking Down Code Llama: Meta's New Open-Source AI Model

Good morning and welcome back to another edition of Full Stack Express, your weekly newsletter covering tech articles, news, case studies and tips up and down the tech stack.

We’ve got a jam-packed newsletter this week, so let’s jump right in:

  • Breaking Down Code Llama: Meta's New Open-Source AI Model

  • How cellular architecture transformed Slack’s infrastructure

  • Why Stripe’s Latest API update is a game-changer for payments

NOTABLE ANNOUNCEMENTS

  • Microsoft releases TypeScript 5.2 with many new features such as decorator metadata and copying array methods.

  • Mozilla announces new MDN front-end developer curriculum.

  • The Electron team releases version 26.0.0, with upgrades to Chromium, V8, and Node.js.

  • Bun 1.0 is launching on September 7. Register for the live-streamed event here.

BREAKING DOWN CODE LLAMA: META’S NEW OPEN-SOURCE AI MODEL

Facebook. Instagram. WhatsApp. Metaverse.

Meta is now making its mark in the AI arms race with the release of Code Llama.

Meta AI - Code Llama

Here are the key takeaways:

  • Capable of generating both code and natural language about code, from either type of prompt

  • Free for both research and commercial use

  • Built on top of Llama 2 and available in three models:

    • Code Llama, foundational code model

    • Code Llama - Python, specialized for Python

    • Code Llama - Instruct, fine-tuned for understanding natural language instructions

The objective for Code Llama is not only to help make workflows faster and more efficient for current developers, but to also lower the barrier to entry for people who are learning to code.

Code Llama is being released in three sizes: 7B, 13B, and 34B parameters. Each model is trained on 500B tokens of code and code-related data, and each excels in different areas.

Code Llama Models

The smaller models are faster and more suitable for tasks that require low latency, like real-time code completion. These models have been trained with fill-in-the-middle (FIM) capability, allowing them to insert code into existing code.
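In practice, a FIM prompt wraps the code on either side of the gap in sentinel tokens. A minimal sketch using the <PRE>/<SUF>/<MID> sentinels described in the Code Llama paper (the exact token strings depend on the tokenizer, so treat them as illustrative):

```javascript
// Build a fill-in-the-middle prompt: the model is asked to generate
// the code that belongs between `prefix` and `suffix`.
function buildFimPrompt(prefix, suffix) {
  return `<PRE> ${prefix} <SUF>${suffix} <MID>`;
}

const prompt = buildFimPrompt(
  'function add(a, b) {\n  return ',
  ';\n}'
);
// The model's completion (e.g. "a + b") is inserted at the <MID> position.
```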

The 34B model, on the other hand, returns the best results and provides the best coding assistance, though it is slower.

Code Llama - Python

How does Code Llama stack up against existing solutions?

It beats most of them, including GPT-3.5, though it still trails GPT-4.

Pretty awesome for open source, right?

Performance benchmarks

Try Code Llama today: GitHub repository.

Find the full in-depth analysis here.

HOW CELLULAR ARCHITECTURE TRANSFORMED SLACK’S INFRASTRUCTURE

With so many businesses relying on Slack, uptime is extremely important and every incident needs to be reviewed.

This is exactly what Slack does after each notable service outage and as it turns out, detecting failures in distributed systems is a hard problem.

For example, a single Slack API request from a user (e.g. loading messages in a channel) may fan out into hundreds of RPCs to service backends, each of which must complete to return a correct response to the user.
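A toy sketch of that fan-out pattern (names are illustrative, not Slack's actual code): because the response needs every backend call to succeed, a single failing RPC surfaces as a user-visible error.

```javascript
// Each backend RPC is simulated as a promise.
function callBackend(name, shouldFail = false) {
  return shouldFail
    ? Promise.reject(new Error(`${name} failed`))
    : Promise.resolve(`${name} ok`);
}

async function loadChannelMessages() {
  // Promise.all rejects as soon as any single RPC fails, so one
  // gray failure in a backend fails the whole user request.
  return Promise.all([
    callBackend('presence'),
    callBackend('history'),
    callBackend('unread-counts', true), // simulate one failing backend
  ]);
}

loadChannelMessages().catch((err) =>
  console.log('request failed:', err.message)
);
```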

Slack’s original architecture

Slack’s service frontends continuously attempt to detect and exclude failed backends, but a number of failures must be observed before a failed server can be excluded.

So how does Slack deal with these gray failures and limit the blast radius of site failures?

By siloing services and treating availability zones (AZs) as cells from which traffic can be drained.

Slack’s new siloed architecture

All services are present in all AZs, but each service only communicates with services within its AZ. That way, failure of a system within an AZ is contained, and traffic may be dynamically routed away to avoid those failures.
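The routing rule above can be sketched in a few lines (hostnames and AZ names here are hypothetical, not Slack's implementation): a frontend only considers healthy backends in its own AZ, so a failure in one AZ never spreads to callers in another.

```javascript
// A hypothetical service registry spanning two AZs.
const backends = [
  { host: 'auth-1', az: 'us-east-1a', healthy: true },
  { host: 'auth-2', az: 'us-east-1b', healthy: true },
  { host: 'auth-3', az: 'us-east-1a', healthy: false },
];

// Siloed routing: filter to healthy backends within the caller's own AZ.
function pickBackends(localAz, pool) {
  return pool.filter((b) => b.az === localAz && b.healthy);
}

console.log(pickBackends('us-east-1a', backends).map((b) => b.host));
// auth-2 is excluded (wrong AZ), auth-3 is excluded (unhealthy)
```

Draining an AZ then amounts to routing user traffic away from that AZ entirely, without any cross-AZ calls to untangle.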

Find the full in-depth analysis here.

WHY STRIPE’S LATEST API UPDATE IS A GAME-CHANGER FOR PAYMENTS

Did you know that buy now, pay later methods now account for more than $300 billion in transactions worldwide?

Furthermore, mobile wallets accounted for roughly half of global ecommerce payment transactions just last year.

With the landscape of payment methods changing so rapidly, how does Stripe streamline the integration process?

Writing code for a payments interface typically requires three specific types of properties:

  • Capabilities of your payments integration (e.g. supporting redirects)

  • Variable properties of a given payment (e.g. amount or currency)

  • Numerous configurable properties (e.g. don’t show Affirm for transactions less than $50)

A parameter like paymentMethodTypes, which typically contains hard-coded configurable properties, can easily cause payment failures due to these nuanced limitations.

Some examples include:

  • Transaction minimums and maximums

  • Currency limitations

  • Merchant category restrictions

  • Differences in recurring payment support

As a result, implementing a new payment method requires you to:

  1. Know all the specific limitations for that payment method

  2. Encode complex logic to hide/display payment methods based on transaction limitations

So how does Stripe address these issues?

By changing the default behavior of the PaymentIntents and SetupIntents APIs and making payment methods dynamic.
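The difference can be sketched with the request parameters themselves (parameter names follow Stripe's Node.js API; the amounts and method list are illustrative). Either object would be passed to stripe.paymentIntents.create(...):

```javascript
// Before: hard-coding payment_method_types means you must encode every
// method's limitations (minimums, currencies, categories) yourself.
const legacyParams = {
  amount: 4000, // $40.00, below Affirm's $50 minimum from the example above
  currency: 'usd',
  payment_method_types: ['card', 'affirm'], // Affirm could fail here
};

// After: let Stripe decide which methods are eligible for this amount
// and currency, filtered by your Dashboard configuration.
const dynamicParams = {
  amount: 4000,
  currency: 'usd',
  automatic_payment_methods: { enabled: true },
};
```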

Furthermore, all of these changes can be easily configured through Stripe’s Dashboard with prebuilt UIs like Payment Element or Checkout.

No code required.

Find the full in-depth analysis here.


TIP OF THE WEEK

This week’s tip is on JavaScript proxy objects!

A proxy object provides a way to intercept and customize operations for a target object.

This gives more control over the behavior of objects, allowing the opportunity to add additional functionalities like validation, logging, or data manipulation.

To create a proxy object, two different objects need to be created:

  1. target object

  2. handler object

Both can be created like so:

const target = { name: 'Alice', age: 30 };

const handler = {
  get(target, property) {
    if (property === 'age') {
      return target[property] + ' years';
    }
    return target[property];
  },
};

Then a proxy object can be created using both:

const proxy = new Proxy(target, handler);
console.log(proxy.name); // Output: Alice
console.log(proxy.age);  // Output: 30 years

The proxy object intercepts property access on name and age with the get() handler. If the accessed property is age, the handler appends the string ' years' to the returned value.
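The same mechanism works for writes. A small sketch of the validation use case mentioned above, using a set() trap to reject bad assignments:

```javascript
const user = { name: 'Alice', age: 30 };

const validator = {
  // set() runs on every assignment to the proxy.
  set(target, property, value) {
    if (property === 'age' && (!Number.isInteger(value) || value < 0)) {
      throw new TypeError('age must be a non-negative integer');
    }
    target[property] = value;
    return true; // signal to the runtime that the assignment succeeded
  },
};

const validated = new Proxy(user, validator);
validated.age = 31;    // passes validation
// validated.age = -5; // would throw TypeError
```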

MEME OF THE WEEK
