Version 47.0

Written & Compiled by Macklin Andrick, GPJ Sr. Creative Technologist
GPJ’s Creative Technology practice supercharges the impact of brand stories told through the strategic use of innovative tech and AI. Strap in as we dive into a whirlwind of innovation and spectacle!
This week’s Creative Tech Byte is less about new frontiers and more about the great digital clean-up, as major platforms try to make AI play by the rules. Spotify is taking on “AI slop” with a new spam filter and a standardized system for disclosing AI use in music credits, after removing over 75 million spammy tracks in the past year. Meanwhile, OpenAI’s Responses API enables next-gen agents to remember and follow through on complex, multi-step tasks. The AI gold rush is shifting from launching new models to building the infrastructure and guardrails that make them more reliable.
Mixboard, from Google Labs, is a new way to visualize your ideas
Google Labs has launched Mixboard, an AI-powered concepting board to quickly visualize, explore, and refine ideas. The open-canvas tool lets users start with a text prompt, a pre-populated board, or imported images. Mixboard leverages generative AI, including the Nano Banana image editing model, to generate visuals and edit them with natural language, such as combining images or making small changes. One-click options like “regenerate” and “more like this” create new versions, and it can generate text based on the context of images on the board. Mixboard is currently in public beta in the U.S., making AI a more accessible tool for creative exploration on everything from home decor to product design.


Spotify is finally taking steps to address its AI slop and clone problem
Spotify recently announced updated policies to combat misuse of AI-generated music and protect artists on its platform. The company is strengthening its impersonation policy to provide stronger protections and clearer recourse for artists whose voices have been cloned without authorization. This fall, Spotify will introduce a new music spam filter to identify and stop recommending “AI slop,” mass uploads, and other fraudulent content, after removing over 75 million spammy tracks in the last year. Spotify is also backing a new industry standard for transparency, developed with DDEX, that surfaces AI disclosures in music credits indicating whether AI was used for vocals, instrumentation, or post-production. These measures are intended to maintain trust across the platform and protect artist identity while supporting the responsible use of AI for creative purposes.
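To make the disclosure idea concrete, here is a purely hypothetical Python sketch (not the actual DDEX schema) of a per-track record covering the three categories Spotify describes: vocals, instrumentation, and post-production.

```python
# Hypothetical illustration only; the real DDEX disclosure format will differ.
from dataclasses import dataclass


@dataclass
class AIUsageDisclosure:
    track_id: str
    ai_vocals: bool
    ai_instrumentation: bool
    ai_post_production: bool

    def credit_line(self) -> str:
        """Render a simple human-readable credit note from the flags."""
        used = [
            label
            for flag, label in (
                (self.ai_vocals, "vocals"),
                (self.ai_instrumentation, "instrumentation"),
                (self.ai_post_production, "post-production"),
            )
            if flag
        ]
        return "AI used for: " + ", ".join(used) if used else "No AI use disclosed"


print(AIUsageDisclosure("example-track", False, True, True).credit_line())
# -> AI used for: instrumentation, post-production
```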
Why OpenAI built the Responses API
OpenAI has introduced the Responses API (/v1/responses) as the recommended method for integrating its most advanced models, including GPT-5, built for the “agentic future.” It lets AI act like a dedicated detective who remembers every step and detail of a conversation, enabling more reliable, complex, multi-step interactions. Designed as a structured reasoning-and-action loop, the API preserves reasoning state across multiple turns to improve efficiency and performance, and its native multimodal support (text, image, audio) makes interactions more natural and powerful. The result of this new architecture: more consistent, less error-prone features that can complete long-term projects on their own.
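As a rough sketch of what that looks like in practice (using the official OpenAI Python SDK; the model name and prompts here are illustrative placeholders), chaining a follow-up call via previous_response_id lets the API carry stored state forward instead of resending the whole conversation:

```python
# Minimal sketch of a multi-turn Responses API call with the openai Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# First turn: kick off a multi-step task.
first = client.responses.create(
    model="gpt-5",  # illustrative; use whichever model your account can access
    input="Outline a three-step plan for auditing an event microsite for accessibility.",
)
print(first.output_text)

# Follow-up turn: previous_response_id chains the new request to the stored
# state of the first one, so the model keeps its reasoning context across turns.
follow_up = client.responses.create(
    model="gpt-5",
    previous_response_id=first.id,
    input="Carry out step one and summarize what you find.",
)
print(follow_up.output_text)
```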


Figma made its design tools more accessible to AI agents
Figma has significantly upgraded its Model Context Protocol (MCP) server to deeply integrate design files with AI coding agents and tools like Figma Make. The server now extends beyond the desktop app, enabling remote access from coding agents and IDEs such as Anthropic’s Claude Code, Cursor, and VS Code, as well as browser-based AI tools. AI agents can generate code and interact with prototypes using the structured, semantic data from Figma files (not just screenshots), accelerating developer workflows. The goal is to create a seamless bridge between design and code, allowing ideas in Figma Make to evolve into production-ready features without manual rewrites.
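For a sense of how an agent plugs into this, here is a minimal sketch using the open-source MCP Python SDK to connect to a Figma MCP endpoint and list the tools it exposes. The URL is an assumption (Figma’s local Dev Mode server is commonly documented at this address); check your own Figma and IDE setup for the actual endpoint.

```python
# Sketch: connect an MCP client to a Figma MCP server and list its tools.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

FIGMA_MCP_URL = "http://127.0.0.1:3845/sse"  # assumed local endpoint; verify in your setup


async def main() -> None:
    async with sse_client(FIGMA_MCP_URL) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            # Each tool exposes structured Figma data (frames, components,
            # variables) that a coding agent can request instead of screenshots.
            for tool in tools.tools:
                print(tool.name, "-", tool.description)


if __name__ == "__main__":
    asyncio.run(main())
```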
Movie studio Lionsgate is struggling to make AI-generated films with Runway
Hollywood studio Lionsgate’s partnership with AI video company Runway has been unproductive over the past year due to limitations in the AI’s training data. The core issue is that even the studio’s extensive catalog, which includes franchises like John Wick and The Hunger Games, is too small a training set for the AI to produce quality, full-length generated films. Experts note that AI models require massive amounts of media to create professional content, and even huge collections like the Disney catalog would be insufficient. The project is further complicated by the legal “gray area” surrounding actor rights and compensation when their likeness is used in AI-generated clips, highlighting both the current technological limits of AI video and the broader legal challenges facing the entertainment industry.


How Anthropic teams use Claude Code
Anthropic has published a guide on how its teams use the Claude Code command-line tool for agentic coding across departments. The document outlines how the AI is used by engineers, data scientists, and even non-technical staff to tackle complex projects and automate workflows. The guide emphasizes that this tool has enabled teams to achieve significant time savings (up to 2-4x), build applications in unfamiliar languages, and shift from building temporary tools to persistent ones.
For more GPJ Creative & Strategic reporting, visit our News & Insights page.