
How to Build a Crybaby Bot: Complete Automation Guide for Pop Mart Collectors


Zilvinas Tamulis

Sep 16, 2025

7 min read

Crybaby drops sell out in minutes, leaving collectors empty-handed against reseller bots. Building an automated Crybaby bot gives genuine collectors a fighting chance by handling rapid checkouts, monitoring stock levels, and competing with professional resellers targeting these coveted blind box figurines. Ready to finally get that beautiful Crybaby figurine?

What makes Crybaby worth automating

Crybaby stands apart in Pop Mart's extensive catalog as more than just another designer toy. Created by artist Molly Yllom, this character embodies complex emotions that resonate deeply with collectors who connect with its message about mental health and emotional expression.

The character has spawned multiple highly sought-after series, including "Crying Again", "Tears Factory", "Crying for Love", and special collaborations with brands like The Powerpuff Girls. Each series features 6 to 12 different designs, with secret editions appearing roughly once every 72 to 288 boxes, depending on the series.

What makes Crybaby particularly challenging to collect manually is the emotional investment fans have in these figures. These aren't impulse purchases but meaningful additions to collections, creating intense demand that far exceeds supply and makes automated purchasing essential for success.

Why manual purchasing fails for Crybaby

Building a bot becomes necessary when you understand the technical obstacles facing manual collectors during Crybaby releases.

Pop Mart's website implements sophisticated anti-bot measures, including CAPTCHAs, IP rate limiting, and geographic restrictions. During high-demand drops, server overload causes pages to load slowly or fail entirely, making manual navigation nearly impossible.

The blind box format amplifies competition, as buyers are unsure which figure they'll receive. Many collectors purchase multiple boxes, hoping for specific designs or secret editions, which multiplies demand and accelerates sellout times to mere minutes or seconds.

Professional resellers deploy automated purchasing systems that process transactions in milliseconds. Individual collectors using traditional browsers simply cannot match this speed, making bot development the only viable solution for securing desired figures.

Essential components for your Crybaby bot

Obtaining a Crybaby figure during limited Pop Mart drops requires assembling several essential technical components that work together.

Python forms the backbone of your automation setup, providing the programming foundation needed to execute your acquisition scripts. This versatile language offers the flexibility to handle Pop Mart's complex website architecture and respond to sudden changes in stock availability.

Playwright serves as your browser automation powerhouse, enabling your script to navigate Pop Mart's website with human-like precision. It handles JavaScript-heavy pages, clicks buttons, and moves through the site like a real user would, carrying your purchase through without manual intervention.

Chrome or Firefox browser drivers create the execution environment for Playwright. Version compatibility between your browser and driver ensures smooth operation during critical purchasing moments.

Residential proxies offer crucial protection by rotating IP addresses from real household devices. This prevents detection systems from flagging your activity and maintains access during high-traffic release periods.

A Pop Mart account is required to purchase certain products. Maintaining multiple active accounts significantly improves your success probability, as each account provides an independent pathway to secure limited releases when competition intensifies.

Analyzing Pop Mart's checkout process

Understanding Pop Mart's website architecture is essential for building a reliable Crybaby bot that can navigate the purchasing process efficiently.

Product pages follow consistent URL patterns, including collection identifiers and product codes. Crybaby products typically appear under "/us/products/[product number]/CRYBABY…" paths, making them easily targetable for automated monitoring and analysis.
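
For example, a quick pattern check can tell whether a scraped link points at a Crybaby product. This is a minimal sketch based on the URL shape above – the exact path segments and the sample URL are assumptions, so adjust the pattern if Pop Mart changes its routing:

import re

# Matches paths like "/us/products/1234/CRYBABY-..." (pattern assumed from observed URLs)
CRYBABY_URL_RE = re.compile(r"^/us/products/\d+/CRYBABY", re.IGNORECASE)

def is_crybaby_url(href: str) -> bool:
    # True when the path follows the Crybaby product pattern
    return bool(CRYBABY_URL_RE.match(href))

print(is_crybaby_url("/us/products/1234/CRYBABY-Crying-Again-Series"))  # True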

Stock status changes trigger different page behaviors that your bot must recognize. "Add to Bag" buttons become disabled when items are unavailable, while live products show active purchase options with real-time inventory updates.
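
A small helper makes that check explicit. This is a sketch only – the button selector is the same one used later in purchase-bot.py, while the disabled-state check is an assumption about Pop Mart's markup that may need adjusting:

# Assumed selector for the "ADD TO BAG" button (reused in purchase-bot.py below)
ADD_TO_BAG_SELECTOR = "div.index_usBtn__2KlEx.index_red__kx6Ql.index_btnFull__F7k90"

async def is_in_stock(page) -> bool:
    button = await page.query_selector(ADD_TO_BAG_SELECTOR)
    if button is None:
        return False  # No active purchase button rendered
    classes = await button.get_attribute("class") or ""
    # Assumption: unavailable products add a "disabled"-style class to the button
    return "disabled" not in classes.lower()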

Anti-bot measures appear at various stages, requiring your automation to handle CAPTCHAs, verify user sessions, and maintain consistent browsing patterns that appear human-like to security systems.
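
One low-effort way to appear more human is to randomize the delay between actions instead of clicking at machine-perfect intervals. A minimal helper (the interval values below are arbitrary) could look like this:

import asyncio
import random

async def human_pause(min_s: float = 0.8, max_s: float = 2.5):
    # Sleep for a random interval so clicks and page loads aren't perfectly periodic
    await asyncio.sleep(random.uniform(min_s, max_s))

Call await human_pause() between navigation steps and clicks in the scripts that follow.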

Building the core bot architecture

To maximize your chances of success, you'll need a reliable bot that handles the Crybaby purchasing process automatically.

Step 1: Prepare your environment

First, make sure your system is ready by installing Python and the necessary libraries:

  • Install Python. Download the latest version from the official Python page. During setup, ensure that you add it to your system's PATH so that you can run scripts easily.
  • Install Playwright. Use the pip command in your terminal to install Playwright, and then run the additional command to download the required browser binaries.
pip install playwright
python -m playwright install

Pick a scheduler. Since your script will need to execute certain actions at set times, you’ll need a reliable way to schedule tasks. APScheduler is a solid choice for this:

pip install apscheduler

Step 2: Set up a project directory

Create a new folder to house your project, including your scripts and any output files. It’s also wise to keep things tidy and isolated by using a virtual environment. Open your terminal and navigate to your new folder:

cd path/to/your/project
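
If you want the virtual environment mentioned above, create and activate one inside the project folder, then install the dependencies into it (activation shown for macOS/Linux – on Windows, run venv\Scripts\activate instead):

python -m venv venv
source venv/bin/activate
pip install playwright apscheduler
python -m playwright install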

The bot will consist of several files. You can either create them all up front or as you progress through the steps. For clarity, here’s how your project structure should look:

popmart-bot (folder)
    - data (folder)
        - products.json
    - job-scheduler.py
    - main.py 
    - popmart-scraper.py
    - purchase-bot.py

Step 3: Build the main script

Start by creating an entry point for your bot. Make a file named main.py and add the following code:

import subprocess
import time
from apscheduler.schedulers.blocking import BlockingScheduler
from datetime import datetime, timedelta
# Maximum number of retries for scraper
MAX_RETRIES = 5
RETRY_DELAY = 10
# Scheduled time for daily scraper run
HOUR = 6
MINUTE = 0
scheduler = BlockingScheduler()
def run_daily_scraper():
    # This function runs the popmart-scraper.py script and schedules job-scheduler.py to run shortly after.
    print(f"\nRunning popmart-scraper at {datetime.now().strftime('%H:%M:%S')}")
    for attempt in range(1, MAX_RETRIES + 1):
        print(f"Attempt {attempt} to run scraper...")
        try:
            subprocess.run(["python3", "popmart-scraper.py"], check=True)
            print("New arrival scraper launched successfully.")
            
            # Schedule job-scheduler to run shortly after
            run_time = datetime.now() + timedelta(seconds=5)
            scheduler.add_job(run_job_scheduler, trigger='date', run_date=run_time)
            print(f"The job-scheduler.py will run at {run_time.strftime('%H:%M:%S')}")
            return  # Exit early on success
        except subprocess.CalledProcessError as e:
            print(f"Scraper failed (attempt {attempt}) with exit code {e.returncode}")
            if attempt < MAX_RETRIES:
                print(f"Retrying in {RETRY_DELAY} seconds...")
                time.sleep(RETRY_DELAY)
    print("All attempts to run the scraper failed. Check popmart-scraper.py for issues.")
def run_job_scheduler():
    print(f"\nRunning job-scheduler.py")
    try:
        subprocess.run(["python3", "job-scheduler.py"], check=True)
    except subprocess.CalledProcessError as e:
        print(f"Job scheduler failed with exit code {e.returncode}")
        print("Please check job-scheduler.py for issues.")
if __name__ == "__main__":
    print("main.py started...")
    run_daily_scraper()  # run once immediately on startup
    # Schedule scraper to run daily at configured time
    scheduler.add_job(run_daily_scraper, 'cron', hour=HOUR, minute=MINUTE)
    print(f"Daily scraper has been scheduled to run at {HOUR:02d}:{MINUTE:02d} every day.")
    
    try:
        scheduler.start()
    except (KeyboardInterrupt, SystemExit):
        scheduler.shutdown()
        print("Scheduler stopped.")

Here’s what happens in the script:

  • Runs the web scraper. popmart-scraper.py is executed immediately when main.py starts.
  • Schedules automatic job processing. Once the scraper finishes successfully, it triggers job-scheduler.py to handle the scraped data.
  • Implements retry logic. If popmart-scraper.py fails, the script waits 10 seconds between attempts, retrying up to 5 times before giving up.
  • Sets up daily scraping. The script schedules popmart-scraper.py to run automatically every day at a specified time using a cron-style scheduler.

Step 4: Scrape the New Arrivals page

Next in the workflow is popmart-scraper.py. Here's what it contains:

import asyncio
import json
import os
from playwright.async_api import async_playwright
import sys
TARGET_KEYWORDS = ["CRYBABY", "Crybaby"]
BASE_URL = "https://www.popmart.com"
OUTPUT_FILE = os.path.join("data", "products.json")
# Proxy config (replace with your credentials)
PROXY_SERVER = "http://us.decodo.com:10001"
PROXY_USERNAME = "username"
PROXY_PASSWORD = "password"
async def scrape_popmart():
    print("New arrivals scraping started...")
    try:
        async with async_playwright() as p:
            browser = await p.chromium.launch(
                headless=True,
                proxy={"server": PROXY_SERVER}
                )
            
            context = await browser.new_context(
                proxy={
                    "server": PROXY_SERVER,
                    "username": PROXY_USERNAME,
                    "password": PROXY_PASSWORD
                }
            )
            page = await context.new_page()
            await page.goto("https://www.popmart.com/us/new-arrivals", timeout=30000)
            await page.wait_for_selector("div.index_title__jgc2z")
            # Try to close location popup if present
            try:
                await page.wait_for_selector("div.index_siteCountry___tWaj", timeout=15000)
                popup_selector = "div.index_siteCountry___tWaj"
                # Wait briefly (2 seconds) for popup to appear without failing if it doesn't
                await page.wait_for_selector(popup_selector, timeout=2000)
                await page.click(popup_selector)
                print("Closed location pop-up.")
            except Exception:
                # Popup not present -- continue normally
                print("No location pop-up detected.")
            # Close policy acceptance pop-up if present (after country pop-up)
            try:
                policy_selector = "div.policy_acceptBtn__ZNU71"
                # Wait until it's visible
                await page.wait_for_selector(policy_selector, timeout=8000, state="visible")
                # Get the element
                policy_btn = await page.query_selector(policy_selector)
                if policy_btn:
                    await asyncio.sleep(1)  # slight buffer for JS readiness
                    await policy_btn.click()
                    print("Clicked policy ACCEPT div.")
                else:
                    print("Could not find the policy ACCEPT div.")
            except Exception as e:
                print(f"Policy ACCEPT pop-up not detected or failed to click: {e}")
            results = []
            sections = await page.query_selector_all("div.index_title__jgc2z")
            for section in sections:
                release_date = (await section.text_content()).strip()
                # Get sibling product list container
                sibling = await section.evaluate_handle("el => el.nextElementSibling")
                product_cards = await sibling.query_selector_all("div.index_productCardCalendarContainer__B96oH")
                for card in product_cards:
                    # Product title
                    title_elem = await card.query_selector("div.index_title__9DEwH span")
                    title = await title_elem.text_content() if title_elem else ""
                    if not any(keyword.lower() in title.lower() for keyword in TARGET_KEYWORDS):
                        continue
                    # Release time
                    time_elem = await card.query_selector("div.index_time__EyE6b")
                    time_text = await time_elem.text_content() if time_elem else "N/A"
                    # Product URL
                    a_elem = await card.query_selector("a[href^='/us']")
                    href = await a_elem.get_attribute("href") if a_elem else None
                    full_url = f"{BASE_URL}{href}" if href else "N/A"
                    # Build entry
                    result = {
                        "title": title.strip(),
                        "release_date": release_date.strip(),  # Raw text like "Upcoming JUL 11"
                        "release_time": time_text.strip(),     # Raw text like "09:00"
                        "url": full_url
                    }
                    results.append(result)
            await browser.close()
            # Save to JSON
            os.makedirs("data", exist_ok=True)
            with open(OUTPUT_FILE, "w", encoding="utf-8") as f:
                json.dump(results, f, indent=2, ensure_ascii=False)
            print(f"Scraped {len(results)} matching products. Saved to {OUTPUT_FILE}")
    except Exception as e:
        print(f"Error during scraping: {e}")
        sys.exit(1)  # Exit with error code 1 on failure
if __name__ == "__main__":
    asyncio.run(scrape_popmart())

The script navigates to the New Arrivals page and collects information on product release schedules. It saves product names, release dates, times, and URLs to data/products.json.

Additionally, it:

  • Handles website popups and navigation. Automatically detects and closes location selection and policy acceptance pop-ups.
  • Uses a proxy server for web requests. All browser traffic is routed through a proxy with authentication, which helps bypass restrictions or rate limits. In this example, Decodo’s residential proxies are utilized for reliable and secure scraping.
  • Filters products by keywords. Only products with titles containing "CRYBABY" or "Crybaby" are collected, ignoring the rest.
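
For reference, each entry saved to data/products.json follows the structure built in the script above – the values below are made up for illustration:

[
  {
    "title": "CRYBABY Crying Again Series-Vinyl Face Doll",
    "release_date": "Upcoming JUL 11",
    "release_time": "09:00",
    "url": "https://www.popmart.com/us/products/1234/CRYBABY-Crying-Again-Series"
  }
]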

Step 5: Configure a job scheduler

The job-scheduler.py script is the core of automating your scraping tasks:

import json
from datetime import datetime
from apscheduler.schedulers.background import BackgroundScheduler
import subprocess
import os
import time
DATA_FILE = os.path.join("data", "products.json")
MAX_RETRIES = 5
RETRY_DELAY = 10
def parse_release_datetime(date_str, time_str):
    # Convert strings like "Upcoming JUL 11" and "09:00" into a datetime object. Assumes the current year.
    try:
        # Remove unwanted keywords
        for keyword in ["Upcoming", "In Stock"]:
            date_str = date_str.replace(keyword, "").strip()
        
        full_date_str = f"{date_str} {datetime.now().year} {time_str}"
        # Example: "JUL 11 2025 09:00"
        return datetime.strptime(full_date_str, "%b %d %Y %H:%M")
    except Exception as e:
        print(f"Failed to parse datetime from '{date_str} {time_str}': {e}")
        return None
def launch_purchase_bot(product):
    # Launch purchase-bot.py with retry logic
    url = product.get("url")
    title = product.get("title")
    
    for attempt in range(MAX_RETRIES + 1):  # +1 for initial attempt
        print(f"Launching purchase bot for '{title}' (attempt {attempt + 1}/{MAX_RETRIES + 1}) at {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
        
        try:
            # Run the purchase bot and wait for it to complete
            result = subprocess.run(
                ["python3", "purchase-bot.py", url],
                capture_output=True,
                text=True,
                timeout=300  # 5 minute timeout
            )
            
            if result.returncode == 0:
                print(f"✅ Purchase bot succeeded for '{title}' on attempt {attempt + 1}")
                return  # Success - exit the retry loop
            else:
                print(f"Purchase bot failed for '{title}' on attempt {attempt + 1}")
                print(f"Return code: {result.returncode}")
                print(f"STDOUT: {result.stdout}")
                print(f"STDERR: {result.stderr}")
                
        except subprocess.TimeoutExpired:
            print(f"⏰ Purchase bot timed out for '{title}' on attempt {attempt + 1}")
        except Exception as e:
            print(f"💥 Exception running purchase bot for '{title}' on attempt {attempt + 1}: {e}")
        
        # If this wasn't the last attempt, wait before retrying
        if attempt < MAX_RETRIES:
            print(f"⏳ Waiting {RETRY_DELAY} seconds before retry...")
            time.sleep(RETRY_DELAY)
    
    print(f"All {MAX_RETRIES + 1} attempts failed for '{title}'.")
def schedule_all_jobs_from_json(json_path):
    scheduler = BackgroundScheduler()
    job_count = 0
    with open(json_path, "r", encoding="utf-8") as f:
        products = json.load(f)
    for product in products:
        run_time = parse_release_datetime(product["release_date"], product["release_time"])
        if not run_time:
            continue
        if run_time < datetime.now():
            continue
        
        scheduler.add_job(launch_purchase_bot, "date", run_date=run_time, args=[product])
        print(f"🧸 Scheduled '{product['title']}' for {run_time}")
        job_count += 1
    if job_count == 0:
        print("No upcoming valid jobs found in JSON. Nothing scheduled.")
        return
    scheduler.start()
    print("Scheduler started. Jobs will run at their scheduled times.")
    try:
        # Keep the scheduler alive without busy-waiting on the CPU
        while True:
            time.sleep(1)
    except (KeyboardInterrupt, SystemExit):
        scheduler.shutdown()
        print("Scheduler stopped.")
if __name__ == "__main__":
    schedule_all_jobs_from_json(DATA_FILE)

The script reads data from products.json, parses it, and schedules the bot to run at each product’s release time.
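
As a quick sanity check of the parsing step, the raw strings scraped in Step 4 become a concrete datetime in the current year. For example, assuming the script runs in 2025:

parse_release_datetime("Upcoming JUL 11", "09:00")
# Returns datetime.datetime(2025, 7, 11, 9, 0)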

Step 6: Automate the purchase

The last script, purchase-bot.py, handles the most important part – automatically adding your Crybaby to the shopping cart.

import sys
import asyncio
from playwright.async_api import async_playwright
# Proxy config (replace with your credentials)
PROXY_SERVER = "http://us.decodo.com:10001"
PROXY_USERNAME = "username"
PROXY_PASSWORD = "password"
async def run(url):
    try:
        async with async_playwright() as p:
            browser = await p.chromium.launch(
                headless=False, # Visible browser for purchase
                proxy={"server": PROXY_SERVER}
                ) 
            context = await browser.new_context( # Create a new incognito context
                proxy={
                    "server": PROXY_SERVER,
                    "username": PROXY_USERNAME,
                    "password": PROXY_PASSWORD
                }
            )  
            page = await context.new_page()  # Use the proxy-authenticated context created above
            await page.goto(url)
            # Try to close location popup if present
            try:
                await page.wait_for_selector("div.index_siteCountry___tWaj", timeout=15000)
                popup_selector = "div.index_siteCountry___tWaj"
                # Wait briefly (2 seconds) for popup to appear without failing if it doesn't
                await page.wait_for_selector(popup_selector, timeout=2000)
                await page.click(popup_selector)
                print("Closed location pop-up.")
            except Exception:
                # Popup not present -- continue normally
                print("No location pop-up detected.")
            # Close policy acceptance pop-up if present (after country pop-up)
            try:
                policy_selector = "div.policy_acceptBtn__ZNU71"
                # Wait until it's visible
                await page.wait_for_selector(policy_selector, timeout=8000, state="visible")
                # Get the element
                policy_btn = await page.query_selector(policy_selector)
                if policy_btn:
                    await asyncio.sleep(1)  # slight buffer for JS readiness
                    await policy_btn.click()
                    print("Clicked policy ACCEPT div.")
                else:
                    print("Could not find the policy ACCEPT div.")
            except Exception as e:
                print(f"Policy ACCEPT pop-up not detected or failed to click: {e}")
            
            # Wait for ADD TO BAG button and click it
            add_to_bag_selector = "div.index_usBtn__2KlEx.index_red__kx6Ql.index_btnFull__F7k90"
            
            # Wait and click button safely
            try:
                await page.wait_for_selector(add_to_bag_selector, timeout=15000)  # 15 seconds timeout
                await page.click(add_to_bag_selector)
                print("Clicked 'ADD TO BAG' button.")
            except Exception as e:
                print(f"Failed to find or click 'ADD TO BAG' button: {e}")
                await browser.close()
                return 1  # Return error code
            
            await asyncio.sleep(3)  # Give it time to process
            # Go to the shopping cart page
            try:
                await page.goto("https://www.popmart.com/us/largeShoppingCart")
                print("Navigated to shopping cart.")
                # Click the checkbox to select all items
                await page.click("div.index_checkbox__w_166")
                # Keep the browser open to allow manual checkout
                print("Browser will stay open for manual checkout. Close it when done.")
                await page.wait_for_event("close", timeout=0)  # Block until the user closes the visible browser window
            except Exception as e:
                print(f"Failed during checkout preparation: {e}")
                return 1  # Return error code
            finally:
                await context.close() # Clean up incognito session
                await browser.close() # Fully shut down Playwright
            
            return 0  # Success
            
    except Exception as e:
        print(f"Fatal error in purchase bot: {e}")
        return 1  # Return error code
if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python3 purchase-bot.py <product_url>")
        sys.exit(1)
    
    product_url = sys.argv[1]
    exit_code = asyncio.run(run(product_url))
    sys.exit(exit_code)

The script visits each product URL at its release time, clicks the ADD TO BAG button, and then opens the shopping cart. The browser remains open, allowing you to log in and complete the purchase manually.

Step 7: Launch the bot

To start the Crybaby scraper bot, run main.py from your terminal:

python main.py

Advanced bot strategies for competitive drops

Sophisticated Crybaby collecting requires advanced techniques that provide an edge over basic automation systems.

Multi-account coordination enables purchasing from several accounts simultaneously, increasing chances of securing limited quantities. Account management systems handle authentication and checkout across multiple user profiles.
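
A minimal sketch of this idea with the same Playwright stack is to keep a saved session per account and open an isolated browser context for each. The session files below are placeholders – each would first need to be exported from a browser session where that account is logged in (for example, with context.storage_state(path="account1.json")):

import asyncio
from playwright.async_api import async_playwright

# Placeholder session files, one per logged-in Pop Mart account
ACCOUNT_SESSIONS = ["account1.json", "account2.json"]

async def open_account_pages(url):
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=False)
        pages = []
        for session_file in ACCOUNT_SESSIONS:
            # Each context is isolated, so cookies and logins don't mix
            context = await browser.new_context(storage_state=session_file)
            page = await context.new_page()
            await page.goto(url)
            pages.append(page)
        # ...drive each page through the add-to-bag flow from Step 6 here...
        await asyncio.sleep(60)  # Keep the contexts open while you work
        await browser.close()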

Predictive purchasing utilizes machine learning models to identify optimal buying moments by analyzing historical data and current market conditions. These systems can trigger purchases before obvious availability signals appear.

Inventory forecasting analyzes restock patterns and supply chain information to predict when sold-out items are likely to return. Advanced bots can position themselves to capitalize on restock opportunities that manual users may miss entirely.

Community intelligence gathering monitors collector forums, social media, and trading communities for insider information about upcoming releases and market trends.

Testing and deployment strategies

Reliable Crybaby bot operation requires thorough testing and careful deployment to ensure success during actual drops.

Sandbox testing validates bot functionality using non-competitive products and off-peak periods. Testing checkout processes with low-value items helps prevent costly errors during major releases.

Performance optimization identifies bottlenecks in bot response times and resource usage. Competitive drops demand maximum efficiency from every system component.

Monitoring and alerting systems track bot performance and notify operators of failures or unusual conditions. Real-time oversight enables quick intervention when problems arise.
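
A lightweight starting point is standard logging plus a webhook ping whenever a run fails. This is a sketch – the webhook URL is a placeholder for your own alert endpoint, and the requests library is an extra dependency (pip install requests):

import logging
import requests

logging.basicConfig(filename="bot.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

WEBHOOK_URL = "https://example.com/your-webhook"  # Placeholder – point this at your own endpoint

def alert(message):
    # Log locally and push a notification so failures aren't missed
    logging.error(message)
    try:
        requests.post(WEBHOOK_URL, json={"text": message}, timeout=10)
    except requests.RequestException as e:
        logging.error(f"Failed to send alert: {e}")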

Backup system preparation ensures redundant capabilities in the event that primary bot systems experience failures. Multiple deployment environments prevent single points of failure during critical operations.

Bottom line

Building a Crybaby bot transforms frustrating manual collecting into a systematic competitive advantage. The combination of intelligent monitoring and advanced anti-detection measures creates a robust system capable of securing limited-edition figures in today's hyper-competitive market.

Success requires careful attention to technical details, thorough testing, and continuous optimization as Pop Mart evolves its anti-bot measures. However, the investment in proper bot development pays dividends by consistently securing figures that manual collectors can only dream of obtaining.

Your next step is to implement these automation strategies and build the technical foundation that transforms sellout frustration into collecting success. The Crybaby community continues growing, but with a well-built bot, you can stay ahead of the competition and secure those emotionally resonant figures that make this hobby so rewarding.

Try residential proxies for free

Start your 3-day free trial and run your automation tools without CAPTCHAs, IP bans, or geo-restrictions.

About the author

Zilvinas Tamulis

Technical Copywriter

A technical writer with over 4 years of experience, Žilvinas blends his studies in Multimedia & Computer Design with practical expertise in creating user manuals, guides, and technical documentation. His work includes developing web projects used by hundreds daily, drawing from hands-on experience with JavaScript, PHP, and Python.


Connect with Žilvinas via LinkedIn

All information on Decodo Blog is provided on an "as is" basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Decodo Blog or any third-party websites that may be linked therein.

Frequently asked questions

Why do Crybaby figures sell out so fast?

Crybaby is one of Pop Mart’s most popular figurine collections, with secret editions and collaborations driving massive demand. Combined with professional reseller bots and blind box buying patterns, drops often sell out in seconds.

What skills do I need to build a Crybaby bot?

Basic knowledge of Python, browser automation (e.g., Playwright), and proxies is essential. Familiarity with scheduling tools and web scraping helps improve reliability, while understanding Pop Mart’s website structure ensures the bot can adapt to anti-bot protections.
