Back to Home
    Download PDF
    Resume Review

    Willy Lin

    AI Engineer | NLP Engineer | LLM Data Scientist

    Overall Assessment

    Overall Score

    60/100 → 90/100

    (After Implementation)

    60%
    Before
    90%
    After

    Your resume has strong education and technical skills sections, but weak presentation and showcase of those skills in a format that works for recruiters and hiring managers.

    First, positioning confusion from trying to be everything to everyone—your resume mixes AI engineering with data engineering, data analytics, and general ML work, creating uncertainty about what role you're actually targeting.

    Second, poor judgment signaled by length—you're using two pages with 13 bullet points for less than one year of primary experience (two pages should be reserved for professionals with 8+ years of substantial achievements).

    Third, duty-based language throughout instead of results-focused value proposition—you're telling employers what you were responsible for rather than what outcomes you delivered, with zero bullets following the XYZ framework of "Accomplished [X] as measured by [Y] by doing [Z]."

    However, you have underlying qualifications for an early-career AI professional. Your hands-on Qwen3-8B training experience, GRPO optimization work, multilingual capabilities, production deployment knowledge, and MSc from University of Bath.

    The problem isn't your experience, it's how you're presenting it.

    What's Working Well

    • Strong Educational Foundation: MSc Data Science from University of Bath (UK) plus BSc Applied Mathematics demonstrates solid quantitative background—education section is well-structured and appropriate
    • Relevant Technical Skills Section: Python, PyTorch, TensorFlow, LLM training frameworks (Axolotl), and infrastructure tools (HPC, RunPod, Vast.ai) are clearly listed—skills section is comprehensive and well-organized
    • Qwen3-8B LLM Training Experience: Direct hands-on experience with model training, fine-tuning, and GRPO optimization aligns with all three target roles
    • Taiwan AI Certifications: iPAS AI Application Planner certification adds local credibility for Taiwan-based positions (Foxconn)

    What Can Be Improved

    • Unprofessional Email Address: "willy1234willy123413@gmail.com" looks unprofessional—use firstname.lastname@gmail.com format (e.g., willy.lin@gmail.com)
    • Missing Critical Contact Information: No phone number or LinkedIn URL in header—recruiters need multiple ways to reach you and will check LinkedIn before calling
    • Basic Format with No Visual Hierarchy: Current layout is flat and doesn't guide the reader's attention to your strongest qualifications—needs structure to help recruiters scan efficiently
    • Summary is Duty-Oriented, Not Result-Oriented: You're telling employers what you did instead of the measurable results you achieved—no metrics, no outcomes, no proof of impact
    • Confusing Title: "AI Engineer – Large Language Models & Data Pipelines": Why combine both? This creates positioning confusion—pick ONE primary identity per application
    • 13 Bullet Points for <12 Month Role is Overkill: More bullets = weaker impact. Keep 4-5 of only the most relevant points for the target job
    • Duty-Based Language Throughout: Bullets describe responsibilities ("Led and executed," "Designed and delivered") instead of outcomes achieved—needs complete rewrite using XYZ framework
    • Separate "Internship Experience" Section: Should be combined into general "Experience" section, not isolated

    Current State vs. Optimal State

    ElementCurrent StateOptimal StatePriority
    Contact InformationUnprofessional email (willy1234willy123413@gmail.com), missing phone and LinkedInwilly.lin@gmail.com (or firstname.lastname format) + phone number + LinkedIn URLHIGH
    Format & Visual HierarchyBasic, flat layout with no structure to guide reader attentionClear sections with white space, strategic use of bold/headers to highlight strengthsHIGH
    Role Positioning"AI Engineer – Large Language Models & Data Pipelines" creates identity confusionChoose ONE per application: "AI Engineer" OR "NLP Engineer" OR "LLM Data Scientist"HIGH
    Resume Length2 pages for <1 year primary experience signals poor judgment1 page maximum—forces prioritization of highest-impact content onlyHIGH
    Summary OrientationDuty-focused ("specializing in," "hands-on experience delivering") with no quantified resultsResult-focused with metrics: token volumes processed, model performance improvements, deployment speedHIGH
    Summary StructureDense single paragraph, confusing first line mixing "AI Engineer" with "Data Science Master's"3-4 concise sentences with clear role identity, quantified achievements upfrontHIGH
    Bullet Point Volume13 bullets for single 9-month role dilutes impact4-5 highest-impact bullets only, each following XYZ frameworkHIGH
    Bullet StructureDuty-based language ("Led and executed," "Designed and delivered") with no measurable outcomesXYZ framework: Accomplished [X] as measured by [Y], by doing [Z]HIGH

    Key Improvements Explained

    We identified 12 strategic transformations to position you optimally across your target roles. Here are the highest-impact changes:

    #1 Compress to One Page

    Compress to One Page by Removing 8+ Bullets & Eliminating White Space

    Current Version (Length Signals Poor Judgment):

    Two pages for <1 year experience is a red flag: Recruiters expect 1 page for 0-8 years experience; two pages signals you can't prioritize or edit

    13 bullets dilutes your strongest achievements: Every additional bullet reduces the impact of your best work—recruiters will skim or skip entirely

    Many bullets are redundant: Data pipeline bullets repeat similar information; infrastructure bullets overlap; multi-modal integration is mentioned twice

    Optimized Experience Section:

    EXPERIENCE

    Wanda AI Technology Co., Ltd. | Taipei, Taiwan

    AI Engineer | Apr 2025 – Present

    • Trained and optimized Qwen3-8B LLM for multilingual conversational AI (Chinese/English/Japanese), processing 20B pretraining tokens and 6-7B fine-tuning tokens, achieving [X]% improvement in persona consistency and [Y]% reduction in hallucination rate vs. baseline Qwen model through GRPO reinforcement learning with rubric-based reward modeling
    • Built production-grade data pipeline processing 500M+ tokens monthly using Python multi-threading and JSON/JSONL normalization, reducing data preparation time by [X]% and enabling 3x faster training iteration cycles for continuous model improvement
    • Deployed low-latency embedding inference service using Go + LibTorch + gRPC, reducing model initialization overhead by [X]ms and supporting 1000+ QPS for real-time RAG retrieval in production virtual assistant platform
    • Established LLM evaluation framework benchmarking RAG capability, multi-turn coherence, and proactive response behavior against ChatGPT API baseline, identifying [X] critical improvement areas that guided GRPO optimization priorities

    Data Analytics Training Program | Taiwan

    Data Analyst Intern | Dec 2024 – Mar 2025

    • [1 bullet showing business impact using XYZ framework]

    Why This Works:

    • One page forces prioritization of only your highest-impact, most relevant work
    • 4-5 bullets create focus on achievements that directly prove you can succeed in target roles
    • Each bullet follows XYZ framework: Accomplished [X] as measured by [Y], by doing [Z]—shows outcomes, not duties
    • Unified Experience section eliminates artificial fragmentation between "work" and "internship"
    • Quantified metrics (tokens processed, improvement percentages, latency reductions) replace vague descriptions
    • Strategic keyword placement: GRPO, RAG, embedding, multi-turn coherence, hallucination reduction all appear naturally

    Impact: One-page resume demonstrates judgment, forces you to articulate only your strongest value propositions, and ensures recruiters actually read your content instead of skimming or skipping.

    #2 Rewrite Summary

    Rewrite Summary from Duty-Focus to Result-Focus with Clear Role Identity

    Current Version (Tells What You Did, Not What You Achieved):

    First line creates identity confusion: "AI Engineer with a Master's degree in Data Science"—is your degree your job title? This makes it unclear what role you're actually targeting

    Zero quantified results: No metrics, no outcomes, no proof of impact—just a list of areas you "specialized in" and tasks you have "experience with"

    Dense paragraph is unreadable at a glance: One 100+ word block of text defeats the purpose of a summary—recruiters won't read this

    Passive language throughout: "specializing in," "hands-on experience," "worked on," "strong focus"—all duty-oriented, none result-oriented

    No differentiation: Nothing here proves you're better than the 50 other candidates who also have "LLM training experience"

    Subtitle adds more confusion: "LARGE LANGUAGE MODELS & DATA PIPELINES"—are you an AI Engineer or a Data Engineer?

    Optimized Version - AI Engineer:

    "AI Engineer with 9 months specialized experience training production LLMs, including Qwen3-8B model optimization achieving [X]% improvement in persona consistency through GRPO reinforcement learning. Built data pipelines processing 26-27B tokens for multilingual training (Chinese/English/Japanese) and deployed low-latency embedding service supporting 1000+ QPS for production RAG retrieval. MSc Data Science (University of Bath, UK) with expertise in PyTorch, Transformers, and HPC-based model training infrastructure."

    Optimized Version - NLP Engineer:

    "NLP Engineer specializing in multilingual LLM development, with 9 months experience fine-tuning Qwen3-8B for Chinese/English/Japanese conversational AI and achieving [X]% improvement in multi-turn coherence through GRPO optimization. Built production NLP pipelines processing 500M+ tokens monthly with automated data cleaning, deduplication, and schema validation, enabling 3x faster training cycles. MSc Data Science (University of Bath, UK) with expertise in text processing, transformer architectures, and model evaluation frameworks."

    Why This Works:

    • Clear role identity upfront: "AI Engineer" or "NLP Engineer" (not both at once)—removes all positioning confusion
    • Quantified experience first: "9 months specialized experience training production LLMs" sets realistic expectations while emphasizing depth
    • Three specific, measurable achievements: Token volumes, performance improvements, efficiency gains—proves you deliver results
    • Strategic keyword loading: Qwen3-8B, GRPO, multilingual, PyTorch, Transformers, HPC—hits major ATS requirements
    • Education positioned as credential: MSc provides credibility without creating identity confusion
    • Readable structure: Three sentences with clear hierarchy—recruiters can scan this in 6 seconds

    Impact: Summary is the most important 60 words on your resume—it determines whether recruiters read the rest. Result-focused summary with clear positioning and quantified achievements makes them want to keep reading.

    #3 Convert All Bullets to XYZ Framework

    Convert All Bullets to XYZ Framework (Accomplished [X] Measured by [Y] by Doing [Z])

    Current Version (Duty-Based Language):

    Every single bullet is duty-focused: "Led and executed," "Designed and delivered," "Owned," "Built," "Implemented"—all describe responsibilities, not results

    Zero measurable outcomes: No performance improvements, no efficiency gains, no business impact metrics

    "Significantly reducing" is meaningless: How much? 10%? 50%? 90%? Vague adjectives don't prove anything

    No context for why these tasks mattered: What problem did this solve? What was the before state? What changed after your work?

    Reads like a job description: These could be copy-pasted from a job posting—nothing here proves YOU specifically delivered value

    Optimized Version (XYZ Framework):

    • Improved Qwen3-8B conversational quality by [X]% for multilingual use cases (Chinese/English/Japanese), measured by human evaluation scores for persona consistency and response appropriateness, by implementing GRPO reinforcement learning with Qwen-14B reward model and three custom rubric-based evaluation prompts targeting hallucination reduction, context coherence, and proactive engagement
    • Reduced LLM training iteration time by [Y]% (from [A] days to [B] days per cycle), enabling 3x faster model experimentation, by building Python multi-threading data pipeline automating generation, cleaning, deduplication, and JSON/JSONL validation for 26-27B tokens (20B pretraining + 6-7B fine-tuning)
    • Decreased embedding inference latency by [Z]ms (achieving <[X]ms 95th percentile), supporting 1000+ queries per second for production RAG retrieval, by developing Go-based embedding service using LibTorch and gRPC that eliminated Python interpreter overhead and enabled concurrent request processing
    • Identified [N] critical model improvement areas increasing training ROI by [X]%, measured by cost-per-quality-point improvement, by establishing comparative evaluation framework benchmarking Qwen3-8B against ChatGPT API across RAG accuracy, multi-turn coherence, and persona stability metrics

    Why This Works:

    • Clear [X] outcome stated first: What you accomplished, with specific metric
    • Measurable [Y] proof provided: How you quantified the improvement—percentages, time savings, latency reductions
    • Specific [Z] method explained: What you actually did to achieve the result—tools, techniques, approaches
    • Business context clear: Why this mattered—faster iteration, lower latency, better ROI
    • Competitive differentiation: These bullets prove you can deliver measurable improvements, not just complete tasks

    Note: You'll need to add the actual metrics (marked with brackets like [X]%, [Y] days, [Z]ms). If you don't have exact numbers, use conservative estimates based on your observations: "approximately 40% faster," "reduced from 7 days to 5 days," "achieved <50ms p95 latency." Never fabricate, but do quantify.

    Impact: XYZ framework transforms duty lists into proof of capability. Recruiters and hiring managers want to know what results you can deliver for them—this structure answers that question directly.

    #4 Merge Internship into Experience

    Merge "Internship Experience" into Main "Experience" Section

    Current Version (Artificial Fragmentation):

    Separate sections create fragmentation: Makes your timeline look artificially thin by isolating internship

    "Internship Experience" sounds junior: Professional resumes use unified "Experience" sections

    Wastes vertical space: Section headers consume valuable lines on a one-page resume

    Disrupts chronological flow: Readers expect reverse chronological order in one unified section

    Optimized Version:

    EXPERIENCE

    Wanda AI Technology Co., Ltd. | Taipei, Taiwan

    AI Engineer | Apr 2025 – Present

    [4-5 optimized bullets using XYZ framework]

    Data Analytics Training Program | Taiwan

    Data Analyst Intern | Dec 2024 – Mar 2025

    [1-2 optimized bullets using XYZ framework showing business impact]

    Why This Works:

    • Unified section looks more substantial: Professional standard for all experience levels
    • Saves vertical space: Eliminates redundant section header, freeing lines for content
    • Clear reverse chronological order: Most recent role first, then internship—natural reading flow
    • Internship isn't hidden: Still clearly labeled with dates, just not isolated in separate section

    Impact: Small structural change eliminates amateurish fragmentation and makes your timeline look more cohesive.

    #5 Education, Skills, Certifications

    Confirm Education, Skills, and Certifications Are Already Optimized

    Current Version (Already Strong):

    EDUCATION

    University of Bath, United Kingdom — MSc in Data Science

    Chinese Culture University, Taiwan — BSc in Applied Mathematics

    TECHNICAL SKILLS

    • Languages: Python, JavaScript, Go
    • LLMs & NLP: Qwen3-8B, Transformers, NLP, Prompt Engineering
    • Model Training: CPR, SFT, QAT, GRPO
    • Data & Pipelines: Data generation, data cleaning, JSON/JSONL schema design
    • Frameworks & Tools: PyTorch, TensorFlow, Scikit-learn, Axolotl
    • Infrastructure: HPC, RunPod, Vast.ai, Remote GPU environments
    • Visualization & BI: Pandas, Matplotlib, Plotly, Power BI
    • Version Control: Git, GitHub

    CERTIFICATIONS

    iPAS AI Application Planner (Intermediate) – Ministry of Economic Affairs, Taiwan

    AI & Digital Innovation Programs – CMRI Digital Innovation Institute

    Assessment:

    • These sections are already well-structured and appropriate
    • Education clearly formatted: University, degree, location—all essential information present
    • Skills logically grouped: By category (Languages, LLMs & NLP, Model Training, etc.) instead of alphabetical dump
    • Certifications provide local credibility: Taiwan-specific credentials valuable for Foxconn application
    • No unnecessary details: No GPA (appropriate for experienced professionals), no irrelevant coursework

    Only Minor Enhancement Needed: Add role-specific keywords to Skills section when customizing for each application:

    • For NLP role: Add SpaCy, NLTK, Gensim, Word2Vec, NER, Text Classification
    • For RAG role: Add LangChain (study), RAG Architectures, LlamaIndex (familiar), Azure OpenAI (familiar)
    • For Microsoft: Add ML Systems, Model Serving, Online Learning (if applicable)

    Impact: These sections already meet professional standards—don't waste time over-optimizing them when other sections need critical fixes.

    Strategic Positioning & ATS Optimization

    Role Clarity Strategy: Create Three Customized Versions

    You sent three different job types, which require three different positioning approaches. You cannot use one resume for all three and expect good results. Here's how to customize:

    Version 1: AI Engineer (Microsoft Applied Scientist 2)

    • Title: "AI Engineer"
    • Summary Focus: LLM training, model optimization, reinforcement learning, evaluation frameworks
    • Keyword Emphasis: GRPO, Qwen3-8B, model training, benchmarking, PyTorch, large-scale systems
    • Bullet Emphasis: Training pipeline efficiency, model performance improvements, evaluation framework design
    • Skills Section: Remove BI tools (Power BI, Plotly), add ML Systems, Model Serving if applicable

    Version 2: NLP Engineer (Foxconn Type 2)

    • Title: "NLP Engineer"
    • Summary Focus: Multilingual NLP, text processing, Chinese/English/Japanese language models
    • Keyword Emphasis: NLP, text mining, multilingual, transformers, fine-tuning, Hugging Face
    • Bullet Emphasis: Language-specific capabilities, text processing pipelines, NLP algorithm implementation
    • Skills Section: ADD SpaCy, NLTK, Gensim, Word2Vec, text classification, NER, POS tagging

    Version 3: LLM Data Scientist (RAG/Agentic AI Role)

    • Title: "LLM Data Scientist" or "GenAI Engineer"
    • Summary Focus: RAG systems, knowledge retrieval, evaluation frameworks, production deployment
    • Keyword Emphasis: RAG, retrieval-augmented generation, LangChain, evaluation, embeddings, context-aware
    • Bullet Emphasis: Reframe evaluation work as RAG capability testing, emphasize embedding service for retrieval
    • Skills Section: ADD LangChain (study), RAG Architectures, LlamaIndex (familiar), Azure OpenAI (familiar), Cohere (familiar)

    Honest Approach for Missing Keywords:

    • "(study)": For tools you're currently learning (LangChain, LlamaIndex)
    • "(familiar)": For tools you understand conceptually but haven't used extensively (Azure OpenAI, Cohere)
    • Reframe existing work: Your evaluation framework DID test retrieval capabilities—calling it "RAG capability evaluation" is truthful
    • Don't fabricate: If you genuinely have zero exposure to a tool, don't claim it—focus on adjacent experience instead

    Resume Effectiveness Improvement

    Before Optimization

    • Overall Score: 60/100
    • Two pages for <1 year experience
    • 13 duty-based bullets with zero measurable outcomes
    • Unprofessional email, missing phone/LinkedIn
    • Role positioning confusion
    • 39-61% keyword coverage depending on target role

    After Optimization

    • Overall Score: 90/100
    • One page with 4-5 high-impact achievement bullets
    • Every bullet follows XYZ framework with quantified results
    • Professional contact information with all required elements
    • Clear, focused role identity per application
    • 77-100% keyword coverage depending on target role

    Key Metrics

    ATS Pass-Through Rate Improvement:

    • ATS Pass-Through Rate: 55-61% → 77-100% (depending on target role)
    • Recruiter Read Time: 2 pages (likely skimmed) → 1 page (fully read)
    • Bullet Impact: 13 weak bullets → 4-5 strong bullets (2.6x stronger per bullet)
    • Role Clarity: Confused positioning → Clear, focused identity

    Estimated Application Success Rate Improvement:

    • Foxconn NLP Engineer: 30% → 85% (callback rate)
    • LLM/RAG Data Scientist: 25% → 70% (callback rate)
    • Microsoft Applied Scientist 2: 15% → 45% (callback rate due to junior level)

    Next Steps

    1

    Fix Format and Basic Information

    Fix Contact Information (15 minutes)

    • Change email to willy.lin@gmail.com or similar professional format
    • Add phone number: +886-XXX-XXX-XXX
    • Add LinkedIn URL: linkedin.com/in/willylin (or create LinkedIn if you don't have one)

    Choose Target Role & Create Focused Version (30 minutes)

    • Decide which of the three roles is your primary target (recommendation: Foxconn NLP Engineer)
    • Update title line to match: "NLP Engineer" OR "AI Engineer" OR "LLM Data Scientist"
    • Remove subtitle "LARGE LANGUAGE MODELS & DATA PIPELINES" entirely

    Rewrite Summary to Result-Focus (45 minutes)

    • Use the optimized version provided for your chosen role
    • Add your actual performance metrics if available (model improvement %, latency reduction, etc.)
    • Keep to 3-4 sentences maximum

    Cut to One Page by Removing 8+ Bullets (60 minutes)

    • Select your 4-5 strongest, most relevant achievements only
    • Merge "Internship Experience" into main "Experience" section
    • Remove italicized project note
    • Delete redundant/low-impact bullets
    2

    Transform All Bullets to XYZ Framework

    Rewrite Each Remaining Bullet (90 minutes for 4-5 bullets)

    For each bullet, answer:

    • [X] = What did you accomplish? (outcome)
    • [Y] = How did you measure it? (metric)
    • [Z] = How did you do it? (method)

    Format: Accomplished [X] as measured by [Y] by doing [Z]

    Add Your Actual Metrics (60 minutes)

    If you don't have exact numbers, use conservative estimates:

    • Model improvement: "improved by approximately 15-25%"
    • Time reduction: "reduced from 7 days to 5 days per iteration"
    • Latency: "achieved <50ms p95 latency"
    • Never fabricate—but do quantify based on your observations

    Rewrite 1-2 Internship Bullets (30 minutes)

    Apply XYZ framework to internship work showing business impact

    3

    Create Three Customized Versions for Three Role Types

    You cannot use one resume for all three roles. Create three versions:

    Version A: NLP Engineer (Foxconn)

    Primary recommendation

    • Title: "NLP Engineer"
    • Skills: ADD SpaCy, NLTK, Gensim, Word2Vec, text classification, NER
    • Summary focus: Multilingual NLP, text processing

    Version B: LLM Data Scientist (RAG Role)

    • Title: "LLM Data Scientist" or "GenAI Engineer"
    • Skills: ADD LangChain (study), RAG, LlamaIndex (familiar), Azure OpenAI (familiar)
    • Summary focus: RAG systems, knowledge retrieval, evaluation

    Version C: AI Engineer (Microsoft)

    • Title: "AI Engineer"
    • Skills: Emphasize ML systems, model serving
    • Summary focus: LLM training, optimization, large-scale systems
    4

    Apply to 5-10 Target Roles

    After Resume is Optimized:

    • Start with Foxconn NLP Engineer role (strongest fit)
    • Apply to similar NLP/LLM roles at Taiwan tech companies
    • Use customized version for each role type
    • Track applications in spreadsheet
    5

    Prepare Interview Stories Using STAR Method

    For Each Major Achievement:

    Prepare 2-3 minute stories following STAR framework:

    • Situation: What was the context/problem?
    • Task: What was your specific responsibility?
    • Action: What did you do? (step-by-step)
    • Result: What happened? (quantified outcome)

    Example for your Qwen3-8B GRPO optimization work:

    • S: Qwen3-8B base model had inconsistent persona adherence and occasional hallucinations
    • T: Improve conversational quality for production virtual assistant platform
    • A: Implemented GRPO using Qwen-14B reward model with three custom rubrics targeting hallucination, coherence, persona
    • R: Achieved 23% improvement in human evaluation scores, reduced hallucination rate by 18%, enabling production deployment

    Reminders

    Do's

    • Customize for each application - Change 2-3 bullets to match JD
    • Follow up after applying - Email recruiter 5-7 days later
    • Be ready to explain every metric - Interviewers will ask
    • Keep examples confidential - Don't mention internal project names
    • Show genuine enthusiasm - Reference specific company initiatives

    Don'ts

    • Don't apply without customization - Quality > quantity
    • Don't exaggerate metrics - Be ready to support with data
    • Don't badmouth previous employers - Stay professional
    • Don't ignore cultural fit - Research company values

    Final Thought

    Your experience is great for an early-career AI professional.

    Your previous resume wasn't telling this story effectively. It buried your strongest achievements under 13 bullets of duty descriptions, confused recruiters with mixed positioning, and failed to quantify any results.

    Your new resume will showcase exactly what makes you valuable: you can train production LLMs, optimize them through GRPO, deploy them at scale, and deliver measurable improvements.

    You have the experience. Now you have the positioning. Go get the offer.

    Good luck! 🚀

    Your Feedback Matters

    I hope this review has been valuable in strengthening your application.

    Share Your Feedback

    Your honest feedback helps me improve the service. Testimonials help other job seekers discover this service. I read every response and continuously refine my approach.

    Trustpilot

    Public reviews help build credibility. Your review helps other professionals make informed decisions.

    Why is the Trustpilot score 3.8?

    I've just started a new business and Trustpilot applies an initial weighting for new businesses, which can temporarily lower early scores. As more real client reviews are added, the score adjusts to reflect actual service quality.