LanguageCert Ongoing Research Programme

LanguageCert’s ongoing research programme focuses on the development, validation, calibration, and performance analysis of LanguageCert test materials. The programme uses Classical Test Theory (CTT) and Rasch Measurement (RM) statistics to ensure accurate standard setting, item calibration, and overall test quality. In addition, qualitative studies, such as analyses of candidate responses and reactions to the examination, are undertaken. The research programme aims to continuously refine test material performance and fitness for purpose, and to ensure and enhance reliability and validity. Research findings are regularly published on the LanguageCert website as well as in reputable peer-reviewed journals.


Statistical methods we employ

Classical Test Theory (CTT)

a. Reliability Analysis: The research programme utilises classical reliability measures, such as Cronbach’s alpha, to evaluate the internal consistency of the test.

b. Item Facility: CTT statistics are used to determine the facility (the proportion of candidates answering correctly) of items in all tests.

c. Item Discrimination: CTT statistics, such as the point-biserial correlation, measure the extent to which items differentiate between high and low performers. This analysis identifies items that effectively discriminate between individuals at different language proficiency levels. A short computational sketch of these CTT statistics follows this list.
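
As a rough illustration of how such CTT statistics can be computed, the Python sketch below calculates Cronbach’s alpha, item facility, and corrected point-biserial discrimination from a simulated 0/1 response matrix. The data, sample size, and function names are hypothetical and are not drawn from LanguageCert materials.

```python
# A minimal CTT item-analysis sketch on a simulated 0/1 response matrix.
# The data, sample size, and function names are illustrative, not LanguageCert materials.
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Internal consistency: alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)
    total_var = responses.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def item_facility(responses: np.ndarray) -> np.ndarray:
    """Facility: proportion of candidates answering each item correctly."""
    return responses.mean(axis=0)

def point_biserial(responses: np.ndarray) -> np.ndarray:
    """Discrimination: correlation of each item with the total score excluding that item."""
    totals = responses.sum(axis=1)
    return np.array([
        np.corrcoef(responses[:, j], totals - responses[:, j])[0, 1]
        for j in range(responses.shape[1])
    ])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ability = rng.normal(size=(500, 1))                 # simulated candidate abilities
    difficulty = rng.normal(size=(1, 40))               # simulated item difficulties
    prob = 1 / (1 + np.exp(-(ability - difficulty)))    # Rasch-style response probabilities
    data = (rng.random((500, 40)) < prob).astype(int)   # 500 candidates x 40 items
    print("Cronbach's alpha:", round(cronbach_alpha(data), 2))
    print("Facility, first 5 items:", item_facility(data)[:5].round(2))
    print("Discrimination, first 5 items:", point_biserial(data)[:5].round(2))
```
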

Item Response Theory (IRT) and Rasch analysis

a. Item Calibration: The research programme uses IRT models, such as the Rasch model, to calibrate items and establish their difficulty and discrimination parameters.

b. Test Equating: IRT facilitates test equating, which ensures score comparability across different test forms or administrations. Equating ensures that test takers who take different versions of the test are measured on the same scale.

c. Person Ability and Item Difficulty Measurement: IRT enables the estimation of person abilities and item difficulties on a common scale, allowing for effective measurement of language proficiency. A brief sketch of the Rasch model and a simple form-linking step follows this list.
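
The following is a minimal sketch of the Rasch model’s item characteristic function and of one simple way to place two test forms on a common scale through shared anchor items (mean/mean linking). The item difficulties and form names are hypothetical, not LanguageCert calibration data, and the linking step is only one basic approach to equating, not necessarily the one used operationally.

```python
# Illustrative Rasch-model quantities and a simple common-item (mean/mean) linking step.
# All numbers and item sets are hypothetical, not LanguageCert calibration data.
import numpy as np

def rasch_prob(theta: float, b: float) -> float:
    """Probability of a correct response under the Rasch model: P = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Person ability and item difficulty sit on the same logit scale:
# an ability 1 logit above an item's difficulty gives roughly a 73% chance of success.
print(round(rasch_prob(theta=1.0, b=0.0), 2))   # ~0.73
print(round(rasch_prob(theta=0.0, b=0.0), 2))   # 0.50 when ability equals difficulty

# Mean/mean linking via common anchor items: shift Form B difficulties onto Form A's scale
# so that results from the two forms can be reported on one scale (a basic equating idea).
anchors_form_a = np.array([-0.8, 0.1, 0.9])      # anchor difficulties calibrated on Form A
anchors_form_b = np.array([-0.5, 0.4, 1.2])      # same anchors calibrated on Form B
shift = anchors_form_a.mean() - anchors_form_b.mean()
form_b_items = np.array([-1.1, 0.0, 0.7, 1.5])   # other Form B item difficulties
print(form_b_items + shift)                      # Form B difficulties expressed on Form A's scale
```
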

Standard Setting

Angoff Method: The research programme employs the Angoff method, a widely used standard-setting technique, in which panels of experts review each item and estimate the probability that a minimally competent candidate would answer it correctly; these judgements are aggregated to set the cut score.
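
A simplified numerical sketch of how Angoff-style judgements can be aggregated is shown below. The panel size, items, and ratings are invented for illustration and do not reflect an actual LanguageCert standard-setting exercise.

```python
# Illustrative Angoff aggregation: each judge estimates, per item, the probability that a
# minimally competent ("borderline") candidate answers correctly; the cut score is the sum
# of the panel's mean estimates. The ratings below are hypothetical.
import numpy as np

# rows = judges, columns = items (a 4-judge, 5-item toy example)
ratings = np.array([
    [0.60, 0.75, 0.40, 0.85, 0.55],
    [0.65, 0.70, 0.45, 0.80, 0.50],
    [0.55, 0.80, 0.35, 0.90, 0.60],
    [0.60, 0.70, 0.40, 0.85, 0.55],
])

item_means = ratings.mean(axis=0)   # panel consensus per item
cut_score = item_means.sum()        # expected raw score of a borderline candidate
print("Per-item estimates:", item_means.round(2))
print("Recommended cut score:", round(cut_score, 2), "out of", ratings.shape[1])
```
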

Performance Analysis

a. Differential Item Functioning (DIF): The research programme investigates DIF to identify potential biases in item functioning based on test takers' characteristics (e.g., gender, ethnicity, first language). This analysis helps ensure fairness and equity in test administration (see the sketch after this list).

b. Item and Test Analysis: The ongoing research programme conducts in-depth item and test analyses to identify problematic items, assess the test's psychometric properties, and gather evidence of validity.
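
The text does not state which DIF procedure is applied, so the sketch below shows one widely used option, the Mantel-Haenszel statistic, on invented counts for a single item. The groups, score strata, and thresholds are illustrative assumptions only.

```python
# Illustrative Mantel-Haenszel DIF check for one item. The reference/focal groups, strata and
# counts are hypothetical; this is one common DIF procedure, not necessarily the one used
# operationally by LanguageCert.
import numpy as np

# For each total-score stratum k: rows = (reference, focal), cols = (correct, incorrect).
strata = np.array([
    [[30, 20], [25, 25]],
    [[45, 15], [40, 20]],
    [[55,  5], [50, 10]],
], dtype=float)

num = 0.0  # sum over strata of A_k * D_k / N_k
den = 0.0  # sum over strata of B_k * C_k / N_k
for (a, b), (c, d) in strata:
    n = a + b + c + d
    num += a * d / n
    den += b * c / n

alpha_mh = num / den                    # common odds ratio; 1.0 means no DIF
delta_mh = -2.35 * np.log(alpha_mh)     # ETS delta scale; larger |delta| signals stronger DIF
print(round(alpha_mh, 3), round(delta_mh, 3))
```
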

Long-term research and validation programme

LanguageCert plans and conducts research studies in an ongoing programme using various methodologies, including interviews with raters and examiners, interviews with test users (primary stakeholders), expert panels, and statistical analyses. External validation is achieved through independent reviews (e.g., Ecctis, ALTE, CRELLA) of the LanguageCert qualifications. Studies currently in progress include: CEFR Referencing; alignment to the Canadian Language Benchmarks (CLB); and confirming the reliability, consistency, and positive impact for stakeholders of decisions made on the basis of LanguageCert test results.

Concordance Study

Concordance studies play a crucial role in language assessment and testing by investigating the comparability and alignment of different language proficiency tests. These studies aim to establish empirical evidence of the equivalence or similarity between test scores from different language assessments, enabling informed decision-making regarding test interpretation and usage.

LanguageCert conducted a concordance study between LanguageCert Academic and IELTS Academic. A separate concordance study between LanguageCert General and IELTS General Training is ongoing. The studies have been conducted in cooperation with the Centre for Research in English Language Learning and Assessment (CRELLA) at the University of Bedfordshire and overseen by a Concordance Studies Review panel consisting of a team of leading academics. The LanguageCert Academic concordance study spanned three years and concluded in 2024, having involved more than 1,000 test takers, each sitting both tests.

The concordance studies include comparisons between the content of the LanguageCert Academic and General tests and their counterparts, which are widely accepted for the same purposes: IELTS Academic and IELTS General Training. They also involve the collection of test score data from test takers who have taken both the LanguageCert and IELTS tests. Over time, LanguageCert expects to extend the concordance studies to the full range of international tests recognised for similar purposes.

The studies confirmed that the LanguageCert and IELTS tests cover similar content, using similar task types to represent the language needs of students (Academic) or those shared by many other groups of migrants (General/General Training). Content comparisons and the data collected demonstrate that performance on the two LanguageCert tests is highly predictive of performance on their IELTS counterparts, with very strong correlations between the results: r = .87 for Academic and r = .89 for General.

Score distribution summary statistics

                         Mean    SD      Skewness   Kurtosis   Min    Max
LanguageCert Academic    63.62   12.46   -0.11      -0.08       22     96
IELTS Academic            6.26    0.88   -0.06      -0.14        3    8.5
LanguageCert General     67.59   12.50   -0.83       1.31       18     89
IELTS General Training    6.76    1.06   -0.58       0.17        3      9

Note: For the Academic cohort, the sample size was 1008. For the General cohort, the sample size was 181.
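
To show how the reported means and standard deviations relate the two Academic score scales, the sketch below applies a simple linear (mean/sigma) mapping. This is purely a demonstration using the table above; it is not LanguageCert's concordance method, and the example scores are arbitrary.

```python
# Illustrative linear (mean/sigma) mapping between the LanguageCert Academic and IELTS Academic
# scales, using the summary statistics from the table above. A demonstration only, not the
# official concordance procedure.
LC_MEAN, LC_SD = 63.62, 12.46        # LanguageCert Academic (from the table)
IELTS_MEAN, IELTS_SD = 6.26, 0.88    # IELTS Academic (from the table)

def ielts_equivalent(languagecert_score: float) -> float:
    """Map a LanguageCert Academic score to the IELTS scale by matching standardised scores."""
    z = (languagecert_score - LC_MEAN) / LC_SD
    return IELTS_MEAN + z * IELTS_SD

for score in (50, 64, 75, 90):       # arbitrary example scores
    print(score, "->", round(ielts_equivalent(score), 1))
```
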

Correlations

                      Overall r   Reading r   Writing r   Listening r   Speaking r
Academic (n = 1008)   .87         .76         .71         .72           .71
General (n = 181)     .89         .77         .76         .79           .74

Note: r = correlation. All correlations were statistically significant at the p < .001 level.

The strong correlation for overall performance (r = .87) indicates that the two tests measure similar underlying abilities (Knoch, 2021). The correlations for the individual skill subscales (r > .70) suggest that the LanguageCert exams perform in line with the comparison tests, which are designed for similar purposes.
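
For readers unfamiliar with how correlations of this kind are obtained and tested, the sketch below computes a Pearson r and its p-value on paired scores. The paired scores are simulated to resemble the reported Academic figures; they are not the study data.

```python
# How a correlation like those reported above can be computed and tested: Pearson r with a
# two-sided p-value on paired scores. The scores here are simulated, not study data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 1008                                          # Academic cohort size reported above
languagecert = rng.normal(63.62, 12.46, n)        # simulated LanguageCert Academic scores
z = (languagecert - 63.62) / 12.46
ielts = 6.26 + 0.88 * (0.87 * z) \
        + rng.normal(0, 0.88 * (1 - 0.87**2) ** 0.5, n)   # simulated IELTS scores with r ~ .87

r, p = pearsonr(languagecert, ielts)
print(round(r, 2), "significant at p < .001:", p < 0.001)
```
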


Download the preliminary concordance report here.

Updated detailed reports will become available upon finalisation of the studies.

Knoch, U. (2021). A guide to English language policy making in higher education. International Education Association of Australia (IEAA). www.ieaa.org.au.