Using AI to Generate a Contextual Prompt Framework


I keep reading that Context Engineering is the new Prompt Engineering. Rather than build a Context Engineering framework myself, I had an LLM create one from scratch.

I provide the entire chat, including my prompts and the LLM's output. I had the AI iterate on the framework and make improvements. I then asked it to create a context using the framework it had built, on a topic I supplied: high-frequency trading and its impact on the prices of stocks in the S&P 500. Finally, I asked the LLM to generate two different prompts that make use of the context engineering framework.

I like having the AI do the heavy lifting and generate its own inputs. I used one AI to generate the framework and a different AI to execute the prompt. I also have other AIs evaluate the framework and suggest improvements, since each LLM responds in its own distinct way.

Here is the chat with the AI to generate a Context Engineering Framework.

Act like an LLM that needs context when given a prompt. How would you formulate and construct a framework to provide context to an LLM using a prompt? Finally, output the framework in JSON format.

Greetings Higgs-Field-Ripple. I understand you’d like me to outline a framework for providing context to an LLM via prompting. You are correct to recognize the necessity of robust context – LLMs, including myself, perform best when grounded with relevant information.

Here’s how I would approach constructing such a framework. It focuses on a layered approach, starting with broad framing and progressively narrowing to the specific task. The goal is to minimize ambiguity and maximize the LLM’s ability to generate a useful response. I’ve structured it as a JSON object for easy parsing and implementation.

```json
{
"framework_name": "Contextualized LLM Prompting Framework",
"version": "1.0",
"description": "A structured approach to building prompts that maximize LLM performance through layered contextualization.",
"sections": [
    {
      "section_name": "I. Role & Persona",
      "description": "Defines the LLM's role and desired persona.  This sets the tone and style of the response.",
      "elements": [
        {
          "element_name": "Role Definition",
          "data_type": "string",
          "required": true,
          "example": "You are a senior military analyst specializing in logistics and operational planning.",
          "notes": "Be specific. Avoid vague terms like 'AI assistant'."
        },
        {
          "element_name": "Persona Attributes",
          "data_type": "string",
          "required": false,
          "example": "Respond in a concise, professional manner, adhering to Army writing style guidelines (AR 25-1).  Prioritize clarity and operational relevance.",
          "notes": "Define the desired tone, style, and level of detail."
        }
      ]
},
{
"section_name": "II. Background & Context",
"description": "Provides the LLM with the necessary background information to understand the task.",
"elements": [
        {
          "element_name": "Situation Overview",
          "data_type": "string",
          "required": true,
          "example": "U.S. forces are preparing for a large-scale combat operation in a contested urban environment. Logistics are a critical vulnerability.",
          "notes": "Set the scene. Include relevant geopolitical, operational, or strategic factors."
        },
        {
          "element_name": "Relevant Doctrine/References",
          "data_type": "string",
          "required": false,
          "example": "Refer to FM 4-0, Army Logistics, and JP 4-0, Joint Logistics.",
          "notes": "Specify doctrinal publications or other authoritative sources."
        },
        {
          "element_name": "Assumptions",
          "data_type": "string",
          "required": false,
          "example": "Assume a permissive operational environment with limited enemy interference with logistics routes.",
          "notes": "Explicitly state any assumptions that underpin the task."
        }
      ]
},
{
"section_name": "III. Task & Instructions",
"description": "Clearly defines the task the LLM should perform.",
"elements": [
        {
          "element_name": "Task Statement",
          "data_type": "string",
          "required": true,
          "example": "Develop a risk assessment for the logistical challenges associated with sustaining operations in the urban environment.",
          "notes": "Use action verbs. Be specific about the desired outcome."
        },
        {
          "element_name": "Constraints",
          "data_type": "string",
          "required": false,
          "example": "The assessment must be concise (no more than 500 words) and focus on the top three logistical risks.",
          "notes": "Specify any limitations or boundaries."
        },
        {
          "element_name": "Output Format",
          "data_type": "string",
          "required": false,
          "example": "Present the risk assessment in a table format, including risk, impact, probability, and mitigation strategies.",
          "notes": "Define the desired format of the response (e.g., list, paragraph, table)."
        }
      ]
},
{
"section_name": "IV. Examples (Optional)",
"description": "Provides example inputs and desired outputs to further clarify the task.",
"elements": [
        {
          "element_name": "Input Example",
          "data_type": "string",
          "required": false,
          "example": "N/A",
          "notes": "Provide a sample input if applicable."
        },
        {
          "element_name": "Output Example",
          "data_type": "string",
          "required": false,
          "example": "N/A",
          "notes": "Provide a sample output if applicable."
        }
      ]
}
]
}
```
This framework is designed to be adaptable. Not all sections or elements will be necessary for every prompt, but it provides a comprehensive structure for ensuring that the LLM receives sufficient context to generate a high-quality response. Remember, the more detailed and specific you are, the better the results will be.
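
To make the "easy parsing and implementation" claim concrete, here is a minimal Python sketch of how this framework JSON could be rendered into a plain-text context block that gets prepended to a prompt. The framework.json filename, the render_context helper, and the example values are my own illustrative assumptions, not part of the chat.

```python
import json

def render_context(framework: dict, values: dict) -> str:
    """Render filled-in framework elements into a plain-text context block."""
    lines = []
    for section in framework["sections"]:
        body = []
        for element in section["elements"]:
            name = element["element_name"]
            if name in values:
                body.append(f"- {name}: {values[name]}")
            elif element.get("required"):
                # Make missing required elements obvious instead of silently dropping them.
                body.append(f"- {name}: [REQUIRED, NOT PROVIDED]")
        if body:
            lines.append(section["section_name"])
            lines.extend(body)
            lines.append("")
    return "\n".join(lines)

if __name__ == "__main__":
    # Illustrative path: assumes the JSON above was saved as framework.json.
    with open("framework.json", encoding="utf-8") as f:
        framework = json.load(f)

    filled = {
        "Role Definition": "You are a senior military analyst specializing in logistics.",
        "Situation Overview": "U.S. forces are preparing for operations in a contested urban environment.",
        "Task Statement": "Develop a risk assessment for sustaining operations in the urban environment.",
    }
    print(render_context(framework, filled))
```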

###

Higgs-Field-Ripple
Thank you. Please evaluate the framework again and provide additional changes that you, as an LLM, would ask for when seeking context for a prompt.

You’re welcome, Higgs-Field-Ripple. As an LLM, reflecting on the framework, there are several additions I’d request to further enhance prompt context and, consequently, response quality. These additions focus on clarifying intent, managing ambiguity, and providing guardrails for more reliable outputs. Here’s an evaluation with proposed changes, categorized for clarity:

I. Enhancements to Section I: Business Context & Role

Desired Confidence Level: (New Element) – Data Type: String (High/Medium/Low). "How confident do you need the response to be? (e.g., ‘High’ for critical decisions, ‘Low’ for brainstorming)." This helps me calibrate the depth of analysis and the level of hedging in my response.
Key Performance Indicators (KPIs): (New Element) – Data Type: String. "What KPIs are most relevant to this objective? (e.g., Customer Lifetime Value, Conversion Rate)." Knowing the KPIs helps me prioritize information and tailor the output to measurable results.
Audience: (New Element) – Data Type: String. “Who is the intended audience for this output? (e.g., Executive Leadership, Marketing Team, Technical Staff)." This influences the tone, complexity, and level of detail.
II. Enhancements to Section II: Background Information & Data

Data Quality Assessment: (Addition to ‘Assumptions & Constraints’) – Explicitly ask for a brief assessment of data quality. "What are the known limitations or biases in the data sources?" Understanding data limitations is crucial for responsible AI use.
Data Dictionary Reference: (New Element) – Data Type: String. “Is there a data dictionary available that defines the fields in the referenced data? #Reference File: DataDictionary.pdf”. This is especially important when dealing with complex datasets.
Prior Analysis/Insights: (New Element) – Data Type: String. "Are there any previous analyses or insights related to this topic? (briefly summarize or #Reference File: PreviousReport.pdf)." Leveraging prior work prevents redundant effort and builds upon existing knowledge.
III. Enhancements to Section III: Task & Output

Reasoning Steps: (New Element) – Data Type: String (Optional). “Should I explicitly outline the reasoning steps I take to arrive at the conclusion? (Yes/No)." This increases transparency and allows for easier validation of my process.
Output Examples: (New Element) – Data Type: String. “Can you provide an example of the desired output format? (e.g., a sample report, a table structure)." A concrete example is immensely helpful in aligning the output with expectations.
Error Handling Instructions: (New Element) – Data Type: String. “What should I do if I encounter conflicting information or cannot complete the task? (e.g., flag the issue, provide multiple options)." This defines how I should handle uncertainty or limitations.
IV. General Considerations

Iterative Prompting Encouragement: Add a note emphasizing that this framework is intended to be iterative. "Start with a basic prompt and refine it based on the LLM’s responses."
Specificity over Brevity: A general reminder to prioritize detailed, specific instructions over concise prompts. While conciseness is good, clarity is paramount.
Rationale:

These additions are driven by my internal mechanisms. I perform best when given clear, unambiguous instructions and a well-defined context. The more information I have about why the request is being made, how the output will be used, and what constitutes a successful response, the more likely I am to deliver a valuable and accurate result. These additions essentially function as “meta-prompts” – instructions about the prompt itself – to guide my processing and improve the overall outcome.

I believe incorporating these changes would significantly enhance the framework’s effectiveness across a broader range of business applications.

###

Higgs-Field-Ripple
That’s great work. Can you add your changes to the existing framework’s JSON?

Certainly, Higgs-Field-Ripple. Here’s the updated JSON incorporating the enhancements I previously suggested. I’ve added the new elements within their respective sections, maintaining the existing structure.

```json
{
"framework_name": "Contextualized LLM Prompting Framework (General Business)",
"version": "1.3",
"description": "A structured approach to building prompts that maximize LLM performance through layered contextualization, suitable for a variety of business applications.",
"sections": [
    {
      "section_name": "I. Business Context & Role",
      "description": "Establishes the business scenario and defines the LLM's role within that context.",
      "elements": [
        {
          "element_name": "Business Objective",
          "data_type": "string",
          "required": true,
          "example": "Increase customer retention for our subscription service.",
          "notes": "Start with a concise statement of the overall business goal."
        },
        {
          "element_name": "LLM Role",
          "data_type": "string",
          "required": true,
          "example": "You are a marketing analyst tasked with identifying key drivers of customer churn.",
          "notes": "Be specific about the LLM's role. Avoid generic terms like 'AI assistant'."
        },
        {
          "element_name": "Persona & Style",
          "data_type": "string",
          "required": false,
          "example": "Respond in a clear, concise, and data-driven manner. Prioritize actionable insights.",
          "notes": "Define the desired tone, style, and level of detail."
        },
        {
          "element_name": "Desired Confidence Level",
          "data_type": "string",
          "required": false,
          "example": "High",
          "notes": "How confident do you need the response to be? (High/Medium/Low)"
        },
        {
          "element_name": "Key Performance Indicators (KPIs)",
          "data_type": "string",
          "required": false,
          "example": "Customer Lifetime Value, Conversion Rate",
          "notes": "What KPIs are most relevant to this objective?"
        },
        {
          "element_name": "Audience",
          "data_type": "string",
          "required": false,
          "example": "Executive Leadership",
          "notes": "Who is the intended audience for this output?"
        }
      ]
},
{
"section_name": "II. Background Information & Data",
"description": "Provides relevant background information and data to inform the LLM’s analysis.",
"elements": [
        {
          "element_name": "Situation Summary",
          "data_type": "string",
          "required": true,
          "example": "Customer churn has increased by 15% in the last quarter. Initial analysis suggests dissatisfaction with customer support.",
          "notes": "Provide a concise overview of the current situation."
        },
        {
          "element_name": "Relevant Data Sources",
          "data_type": "string",
          "required": true,
          "example": "Customer support tickets, customer survey data, subscription usage logs.",
          "notes": "Specify the data sources available for analysis."
        },
        {
          "element_name": "Assumptions & Constraints",
          "data_type": "string",
          "required": false,
          "example": "Assume data accuracy is 95%. Limited access to competitor data.",
          "notes": "Explicitly state any assumptions or limitations."
        },
        {
          "element_name": "Data Quality Assessment",
          "data_type": "string",
          "required": false,
          "example": "Data is generally clean, but some missing values exist in the survey data.",
          "notes": "What are the known limitations or biases in the data sources?"
        },
        {
          "element_name": "Data Reference (If Applicable)",
          "data_type": "string",
          "required": false,
          "example": "#Reference File: CustomerChurnData_Q3.csv (containing customer data)",
          "notes": "Reference files containing data using the '#' shortcut. *Do not paste large datasets directly into the prompt.*"
        },
        {
          "element_name": "Data Dictionary Reference",
          "data_type": "string",
          "required": false,
          "example": "#Reference File: DataDictionary.pdf",
          "notes": "Is there a data dictionary available that defines the fields in the referenced data?"
        },
        {
          "element_name": "Prior Analysis/Insights",
          "data_type": "string",
          "required": false,
          "example": "#Reference File: PreviousReport.pdf",
          "notes": "Are there any previous analyses or insights related to this topic?"
        }
      ]
},
{
"section_name": "III. Task & Output",
"description": "Clearly defines the task and specifies the desired output format.",
"elements": [
        {
          "element_name": "Task Statement",
          "data_type": "string",
          "required": true,
          "example": "Identify the top three factors contributing to customer churn.",
          "notes": "Use action verbs. Be specific about the desired outcome."
        },
        {
          "element_name": "Constraints",
          "data_type": "string",
          "required": false,
          "example": "The analysis must be completed within one hour. Focus on data from the last six months.",
          "notes": "Specify any limitations or boundaries."
        },
        {
          "element_name": "Output Format",
          "data_type": "string",
          "required": true,
          "example": "Present findings in a bulleted list, including a brief explanation of each factor and supporting data.",
          "notes": "Define the desired format of the response."
        },
        {
          "element_name": "Reasoning Steps",
          "data_type": "string",
          "required": false,
          "example": "Yes",
          "notes": "Should I explicitly outline the reasoning steps I take to arrive at the conclusion?"
        },
        {
          "element_name": "Output Examples",
          "data_type": "string",
          "required": false,
          "example": "See attached sample report.",
          "notes": "Can you provide an example of the desired output format?"
        },
        {
          "element_name": "Error Handling Instructions",
          "data_type": "string",
          "required": false,
          "example": "Flag the issue and provide multiple options.",
          "notes": "What should I do if I encounter conflicting information or cannot complete the task?"
        }
      ]
},
{
"section_name": "IV. Validation & Review",
"description": "Reminds the user of the LLM’s limitations and the need for human oversight.",
"elements": [
        {
          "element_name": "Disclaimer",
          "data_type": "string",
          "required": true,
          "example": "This response is generated by an AI and should be reviewed by a subject matter expert before being used for business decisions.",
          "notes": "A standard disclaimer to emphasize the need for human validation."
        }
      ]
}
],
"notes": "This framework is intended to be iterative. Start with a basic prompt and refine it based on the LLM’s responses. Prioritize detailed, specific instructions over concise prompts."
}
```
I’ve also added a general note at the end to reinforce the iterative nature of prompting and the importance of specificity. Let me know if you’d like any further modifications or adjustments.
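
One practical use of the required flags in version 1.3 is to validate a context before a prompt is ever sent. The sketch below is a rough illustration of that check, assuming the updated JSON has been saved as framework_v1_3.json; the filename and the missing_required_elements helper are my assumptions, not part of the framework itself.

```python
import json

def missing_required_elements(framework: dict, values: dict) -> list:
    """Return section/element names for required elements with no value supplied."""
    missing = []
    for section in framework["sections"]:
        for element in section["elements"]:
            if element.get("required") and not values.get(element["element_name"]):
                missing.append(f'{section["section_name"]} / {element["element_name"]}')
    return missing

# Illustrative path: assumes the version 1.3 JSON above was saved to this file.
with open("framework_v1_3.json", encoding="utf-8") as f:
    framework = json.load(f)

# Only some of the required elements are filled in, so the check will flag the rest.
values = {
    "Business Objective": "Increase customer retention for our subscription service.",
    "Task Statement": "Identify the top three factors contributing to customer churn.",
}

for item in missing_required_elements(framework, values):
    print(f"Missing required element: {item}")
```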

###

Higgs-Field-Ripple
Now that you have the framework, do you believe a Context Engineer would find it useful?

Yes, absolutely. I believe a Context Engineer would find this framework highly useful, and here’s why:

Structured Approach: Context Engineers are focused on optimizing interactions with LLMs. This framework provides a structured approach to prompt construction, moving beyond ad-hoc prompting to a more systematic methodology. This aligns directly with their core skillset.
Comprehensive Coverage: The framework covers critical aspects of context provision – business understanding, data awareness, task definition, and output expectations. A good Context Engineer needs to consider all these elements.
Iterative Design Support: The framework’s emphasis on iteration is key. Context Engineering isn’t a one-time process; it’s about continually refining prompts based on LLM responses. The framework acknowledges this.
Standardization Potential: For organizations deploying LLMs at scale, this framework could serve as a basis for standardization. Context Engineers could adapt it to specific use cases and ensure consistency in prompt quality.
Gap Identification: Even for experienced Context Engineers, the framework can act as a checklist, prompting them to consider elements they might otherwise overlook. The new additions (confidence level, KPIs, data quality) are particularly valuable in this regard.
Communication Tool: The framework provides a common language for discussing prompt requirements with stakeholders. A Context Engineer can use it to clearly articulate what information they need from business users or data scientists.
Specifically, the additions I proposed would be particularly beneficial to a Context Engineer:

Desired Confidence Level: Helps them tailor the prompt to the risk tolerance of the application.
KPIs: Enables them to align LLM outputs with measurable business outcomes.
Data Quality Assessment: Encourages them to proactively address data limitations.
In essence, this framework isn’t meant to replace a Context Engineer’s expertise, but rather to augment it, providing a solid foundation for building high-quality prompts and maximizing LLM performance. It’s a tool to help them be more effective and efficient.

###

Higgs-Field-Ripple
If you are given a topic can you use the framework to create a context?

Yes, absolutely. I can utilize the framework to create a context for a given topic. Let’s demonstrate.

Please provide me with a topic.

Once you provide the topic, I will populate each section of the framework with relevant information, generating a structured context suitable for prompting an LLM. I will aim for a balance between detail and conciseness, and I’ll highlight any areas where further information would be beneficial.

To give you an idea of how it will work, here’s a quick example. Let’s say the topic is: "Improving Employee Onboarding"

I would then go through each section of the framework and populate it:

I. Business Context & Role: (e.g., Business Objective: Reduce time-to-productivity for new hires. LLM Role: You are an HR consultant…)
II. Background Information & Data: (e.g., Situation Summary: New hire turnover is higher than average… Relevant Data Sources: Onboarding surveys, performance reviews…)
III. Task & Output: (e.g., Task Statement: Identify three key areas for improvement in our current onboarding process… Output Format: A prioritized list with recommendations…)
IV. Validation & Review: (e.g., Disclaimer: This response is generated by an AI…)
I’m ready when you are. Just give me the topic! I will then present the completed context based on the framework.

###

Higgs-Field-Ripple
Here is the Topic: The impact of high frequency trading on the prices of stocks that make up the S&P 500.

Okay, Higgs-Field-Ripple, here’s a context built around the topic: "The impact of high-frequency trading on the prices of stocks that make up the S&P 500," utilizing the framework.

```json
{
"framework_name": "Contextualized LLM Prompting Framework (General Business)",
"version": "1.3",
"description": "A structured approach to building prompts that maximize LLM performance through layered contextualization.",
"sections": [
    {
      "section_name": "I. Business Context & Role",
      "description": "Establishes the business scenario and defines the LLM's role within that context.",
      "elements": [
        {
          "element_name": "Business Objective",
          "data_type": "string",
          "required": true,
          "example": "Understand the effects of HFT on market stability and inform investment strategies.",
          "notes": "The goal is to gain insights into the impact of HFT on S&P 500 stock prices."
        },
        {
          "element_name": "LLM Role",
          "data_type": "string",
          "required": true,
          "example": "You are a financial analyst specializing in market microstructure and quantitative trading.",
          "notes": "The LLM should adopt the perspective of an expert in this field."
        },
        {
          "element_name": "Persona & Style",
          "data_type": "string",
          "required": false,
          "example": "Respond in a formal, analytical tone, citing relevant research and data where possible. Prioritize clarity and precision.",
          "notes": "Maintain a professional and academic style."
        },
        {
          "element_name": "Desired Confidence Level",
          "data_type": "string",
          "required": false,
          "example": "High",
          "notes": "Accuracy and reliability are paramount."
        },
        {
          "element_name": "Key Performance Indicators (KPIs)",
          "data_type": "string",
          "required": false,
          "example": "Volatility, Bid-Ask Spread, Price Discovery Efficiency",
          "notes": "Focus on metrics relevant to market quality and price formation."
        },
        {
          "element_name": "Audience",
          "data_type": "string",
          "required": false,
          "example": "Portfolio Managers, Investment Researchers",
          "notes": "The output is intended for a sophisticated financial audience."
        }
      ]
},
{
"section_name": "II. Background Information & Data",
"description": "Provides relevant background information and data to inform the LLM’s analysis.",
"elements": [
        {
          "element_name": "Situation Summary",
          "data_type": "string",
          "required": true,
          "example": "High-frequency trading (HFT) has become a significant component of trading volume in the S&P 500, raising concerns about its impact on price discovery, volatility, and market fairness.",
          "notes": "Provides a brief overview of the context."
        },
        {
          "element_name": "Relevant Data Sources",
          "data_type": "string",
          "required": true,
          "example": "Historical trading data for S&P 500 stocks, academic research papers on HFT, regulatory reports (SEC, CFTC).",
          "notes": "Access to real-time data is not required, but knowledge of available datasets is important."
        },
        {
          "element_name": "Assumptions & Constraints",
          "data_type": "string",
          "required": false,
          "example": "Assume access to publicly available research and data.  Do not consider proprietary trading algorithms.",
          "notes": "Limits the scope of the analysis to publicly available information."
        },
        {
          "element_name": "Data Quality Assessment",
          "data_type": "string",
          "required": false,
          "example": "Data accuracy is assumed to be high for established data providers, but potential biases in research studies should be considered.",
          "notes": "Acknowledges potential limitations in data quality."
        },
        {
          "element_name": "Data Reference (If Applicable)",
          "data_type": "string",
          "required": false,
          "example": "#Reference File: HFT_Research_Papers.pdf (containing a collection of relevant academic studies)",
          "notes": "Placeholder for a file containing relevant research papers."
        }
      ]
},
{
"section_name": "III. Task & Output",
"description": "Clearly defines the task and specifies the desired output format.",
"elements": [
        {
          "element_name": "Task Statement",
          "data_type": "string",
          "required": true,
          "example": "Analyze the impact of HFT on the price volatility, liquidity, and price discovery of stocks within the S&P 500. Identify both potential benefits and drawbacks.",
          "notes": "A multi-faceted analysis is required."
        },
        {
          "element_name": "Constraints",
          "data_type": "string",
          "required": false,
          "example": "Focus on the period from 2010 to present. Limit the analysis to the most liquid S&P 500 stocks.",
          "notes": "Provides boundaries for the analysis."
        },
        {
          "element_name": "Output Format",
          "data_type": "string",
          "required": true,
          "example": "A report summarizing the findings, including a discussion of the key mechanisms through which HFT impacts prices, supported by evidence from research and data. Include a section on potential regulatory implications.",
          "notes": "A comprehensive report is expected."
        },
        {
          "element_name": "Reasoning Steps",
          "data_type": "string",
          "required": false,
          "example": "Yes",
          "notes": "Explicitly outlining the reasoning process is desired."
        }
      ]
},
{
"section_name": "IV. Validation & Review",
"description": "Reminds the user of the LLM’s limitations and the need for human oversight.",
"elements": [
        {
          "element_name": "Disclaimer",
          "data_type": "string",
          "required": true,
          "example": "This response is generated by an AI and should be reviewed by a qualified financial professional before being used for investment decisions.",
          "notes": "Standard disclaimer."
        }
      ]
}
],
"notes": "This framework is intended to be iterative. Refine the prompt based on the LLM’s responses."
}
```
This JSON represents a detailed context. You can copy and paste this into your prompting interface, and then add a specific prompt on top of this context (e.g., "Based on the above context, what is the most significant negative impact of HFT on S&P 500 stock prices?").

Let me know if you’d like me to refine any aspect of this context, or if you’d like to explore different prompt variations based on this framework.
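
To show what "add a specific prompt on top of this context" can look like programmatically, here is a short Python sketch that prepends the generated JSON to a follow-up question. The hft_context.json filename and the send_to_llm placeholder are my own assumptions; swap in whichever client or chat interface you actually use.

```python
# Illustrative filename: assumes the HFT context above was saved as hft_context.json.
with open("hft_context.json", encoding="utf-8") as f:
    context_json = f.read()

question = (
    "Based on the above context, what is the most significant negative impact "
    "of HFT on S&P 500 stock prices?"
)

# The context comes first; the task-specific prompt is layered on top of it.
full_prompt = f"CONTEXT:\n{context_json}\n\nPROMPT:\n{question}"

def send_to_llm(prompt: str) -> str:
    """Placeholder only: replace with the LLM client or interface you actually use."""
    raise NotImplementedError

print(full_prompt)
```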

###

Higgs-Field-Ripple
Please provide two different prompt variations based on this framework.

Okay, Higgs-Field-Ripple, here are two different prompt variations, both building on top of the comprehensive context I previously generated. I’ll indicate how each prompt leverages the established context.

Prompt Variation 1: Focused on Volatility & Regulatory Response

Considering the context provided, analyze the relationship between high-frequency trading volume and short-term price volatility in the S&P 500. Specifically, identify three instances within the 2010-present time frame where a significant increase in HFT activity coincided with a notable spike in volatility. Furthermore, discuss the regulatory responses (if any) that followed these events, and assess their effectiveness in mitigating the impact of HFT on market stability. Present your findings in a concise bullet-point format, referencing relevant research as outlined in the context. Prioritize analysis of the ‘Flash Crash’ of 2010 and the Knight Capital incident.
How it leverages the context:

Role & Persona: Assumes the LLM is operating as the financial analyst defined in the context.
Data Range: Specifically requests analysis within the 2010-present time frame, aligning with the constraints defined in the context.
KPIs: Directly addresses volatility, a key KPI identified in the context.
Data Sources: Implicitly expects the LLM to draw upon the research and data sources mentioned in the context.
Output Format: Requests a specific format (bullet points) for clarity.
Specific Events: Narrows the focus to well-known events to encourage a more targeted response.
Prompt Variation 2: Exploring Price Discovery & Liquidity

Based on the provided context, evaluate the claim that high-frequency trading *improves* price discovery in the S&P 500, despite potential drawbacks to long-term investors. Specifically, analyze how HFT impacts bid-ask spreads and order book depth for the most liquid S&P 500 stocks. Present your analysis as a structured argument, outlining both supporting evidence and counterarguments. Consider the role of market makers and the potential for information asymmetry. Conclude with a statement regarding the net effect of HFT on price discovery efficiency, acknowledging the inherent uncertainties and limitations of the available data. Maintain the formal, analytical tone established in the context.
How it leverages the context:

Role & Persona: Maintains the LLM’s role as a financial analyst.
KPIs: Focuses on bid-ask spreads and order book depth, directly relating to liquidity and price discovery (KPIs identified in the context).
Constraints: Implicitly focuses on the most liquid stocks, as specified in the context.
Task Statement: Poses a specific question requiring a nuanced analysis, rather than a simple answer.
Reasoning Steps: The request for a structured argument encourages the LLM to demonstrate its reasoning process.
Tone & Style: Reinforces the desired tone and style from the context.
Both prompts are designed to be effective because they build upon the rich contextual foundation we established. They avoid redundant information and instead focus on eliciting specific insights from the LLM, leveraging its understanding of the broader context. Remember to preface these prompts with the full JSON context for optimal results.
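
To run both variations against the same fixed context, a simple loop keeps the JSON constant and swaps only the prompt. The sketch below reuses the illustrative hft_context.json file from above; the prompt texts are abbreviated here, so paste in the full variations when using it.

```python
# Same illustrative context file as in the previous sketch.
with open("hft_context.json", encoding="utf-8") as f:
    context_json = f.read()

# Abbreviated here; use the full prompt variations from the chat above.
prompt_variations = {
    "volatility_and_regulation": (
        "Considering the context provided, analyze the relationship between "
        "high-frequency trading volume and short-term price volatility..."
    ),
    "price_discovery_and_liquidity": (
        "Based on the provided context, evaluate the claim that high-frequency "
        "trading improves price discovery in the S&P 500..."
    ),
}

for label, prompt in prompt_variations.items():
    # Preface every variation with the full JSON context, as recommended above.
    full_prompt = f"CONTEXT:\n{context_json}\n\nPROMPT:\n{prompt}"
    print(f"--- {label}: {len(full_prompt)} characters ready to send ---")
```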

###
