Debunking Rejection

Who else has been rejected? From a job? A client opportunity? A personal relationship?

It’s normal after rejection to wonder: if I had done x, y, or z differently, would I still have been rejected?

This week, CArtLab Solutions is borrowing hypothesis testing from statistics to help work through the “icky” feeling that comes with rejection in various areas of life.

First, it is important to ask: what type of information or data does this moment provide?

Is it quantitative or qualitative?

Is the data continuous or discrete?

Does the data follow a normal distribution, or is it skewed because it was collected with a specific bias?

In the type of hypothesis testing used in this article, a researcher runs experiments to support or reject a hypothesis on data that is quantitative, continuous, and normally distributed, and where the population is at least 10x larger than the sample.

The researcher collects data to estimate a mean (μ) and describes the dataset according to its variance (σ²):

[Figure: curves representing different data distributions]
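
As a minimal sketch of this step, the snippet below (Python with NumPy, using invented “outcome scores” purely for illustration) estimates the mean (μ) and variance (σ²) from a sample. The sample should also stay far smaller than the population it is drawn from, per the 10x rule above.

```python
import numpy as np

# Invented "outcome scores" from repeated attempts (e.g., interview feedback).
# Purely illustrative -- not real data.
scores = np.array([62.0, 71.5, 68.0, 74.0, 66.5, 70.0, 69.5, 73.0])

mu = scores.mean()             # sample estimate of the mean, μ
sigma_sq = scores.var(ddof=1)  # sample estimate of the variance, σ² (ddof=1 for an unbiased estimate)

print(f"mean μ ≈ {mu:.2f}, variance σ² ≈ {sigma_sq:.2f}")
```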

The data collected must be repeatable and must be compared to a known fact, property, or parameter of the same system.

For example, when facing a rejection, one might immediately jump to a fear misrepresented as a fact: “I’m not good enough,” “it’s all my fault,” or “I should have done x, y, or z.”

Instead, hypothesis testing teaches us to collect more data to confirm that results are repeatable, that is, collecting data to test the null hypothesis:

-Apply to more jobs

-Pitch the product to other clients

-Keep dating, looking for another relationship

Also, state a second claim to compare against in the hypothesis test: the research hypothesis, H1 (see the sketch after these examples):

-An applicant who applied to 6 colleges was accepted by 3 and rejected by 3, and the rejections were irrelevant to completing a college degree

-If a person does not pitch the product, they will not get a client

-If a person does not shower that day, they will not get a date that turns into a relationship
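
To make the two hypotheses concrete, here is a hedged sketch of a one-sample t-test with SciPy. The scores and the baseline are assumptions made up for illustration: H0 says the true mean of outcomes equals a known baseline, and H1 says it differs.

```python
import numpy as np
from scipy import stats

# Invented outcome scores gathered by collecting more data
# (more applications, more pitches) -- illustrative only.
scores = np.array([62.0, 71.5, 68.0, 74.0, 66.5, 70.0, 69.5, 73.0])

# Assumed known parameter of the same system, used as the point of comparison.
# H0 (null): the true mean equals the baseline -- the pattern is noise.
# H1 (research): the true mean differs from the baseline -- the pattern is a signal.
baseline = 65.0

t_stat, p_value = stats.ttest_1samp(scores, popmean=baseline)
print(f"test statistic t ≈ {t_stat:.2f}, p-value ≈ {p_value:.3f}")
```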

Why choose two representations of the hypothesis?

It is how we check the assumptions behind our hypothesis, to confirm the data are not noise but instead a signal of repeatable results worth noting.

For the sake of this article, we will focus on the rejection region method, which starts by choosing a significance level, alpha.

This step is about the confidence of the result: does the researcher want to be 95% confident of their hypothesis after the testing? 90%? In the case of 95%, one would choose an alpha value of 5% (0.05); for 90%, one would choose 10% (0.10).

It is up to the researcher.

This means that if the test result repeatably falls within the rejection region, the null hypothesis is rejected, and the control is held by the tester, not by the applicant.

[Figure: the rejection region is defined by the researcher designing the experiment]
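
As a sketch of the rejection region method under the same assumptions as above (a two-sided t-test, with an illustrative test statistic), the snippet below chooses alpha, derives the critical value, and checks whether the statistic falls in the rejection region:

```python
from scipy import stats

alpha = 0.05   # researcher's choice: 95% confidence -> alpha = 0.05 (use 0.10 for 90%)
df = 7         # degrees of freedom, n - 1 for an assumed sample of 8 observations
t_stat = 2.10  # illustrative test statistic (assumed value, not computed from real data)

# Two-sided test: the rejection region is |t| greater than the critical value.
critical_value = stats.t.ppf(1 - alpha / 2, df)

print(f"critical value ≈ {critical_value:.2f}")
if abs(t_stat) > critical_value:
    print("Statistic falls in the rejection region: reject the null hypothesis.")
else:
    print("Statistic is outside the rejection region: fail to reject the null hypothesis.")
```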

Data falling into the rejection region may not be something the applicant has control over; rather, it may be something they influence only in a minor way.

Next time you or a friend gets a rejection, remember to ask: is this enough data to jump to fear-based conclusions? Is the data skewed by a situation out of your control? Is it simply a dataset tested with a large alpha value, and thus low confidence in the result?

At CArtLab Solutions, we understand the complexity of factors that shape a rejection region, and we accept that when our alpha values of confidence do not match a lead, rejection may be the only option to move forward, on to collecting more data from more clients.