Directions for Teaching and Learning

AI tools present an opportunity for meaningful discussions about academic integrity with students. At NIC, we recognize that AI can be used ethically to enhance teaching and learning experiences. The use of AI for assignments or assessments does not inherently constitute academic misconduct (Eaton, 2023, 6 Tenets of Postplagiarism). Instructors are encouraged to explicitly outline the permission levels for AI use in their course outlines and to engage students in open conversations about these technologies. These discussions support the development of digital literacy by addressing both the risks and benefits of AI tools.

If an instructor specifies that no external assistance is permitted for a graded assignment, NIC will consider any unauthorized use of GenAI tools a form of academic misconduct, as outlined in the NIC Academic Integrity Policy. Policy provisions include the review of academic work for authenticity and originality and define the inappropriate use of digital technologies as misconduct.

It is critical for instructors to provide clear, individualized statements in their course outlines regarding AI use in relation to course outcomes, assignments, assessments, and graded evaluations. Indicate clearly whether AI may be used for idea generation, editing, organization, studying, and so on, so that both you and your students know the boundaries to work within. If academic misconduct involving AI is suspected, instructors should follow NIC's established academic integrity procedures.

AI offers interesting possibilities but also raises important questions about meaningful learning, how learning is reliably measured, and how assessments might be redesigned to focus more on learning processes rather than just the final product. While shifting to authentic assessments can help address potential misuse of AI, it is also likely that authentic assessments will evolve to include collaboration with or the integration of AI into student work in the future.

At NIC, it is essential to outline expectations regarding the use of AI tools in the course outline. These expectations should align with other course policies and be reinforced both in writing and through discussions with students throughout the term. Providing a rationale for how these guidelines support course learning outcomes fosters alignment, transparency, and understanding. See the NIC AI Assessment Scale for assignment-level language on which 'lane(s)' of GenAI use are permitted and where no GenAI use is allowed. Also, visit the Providing Clarity and Alignment: Course Outline Chart page for further details on how to align graded course work and the use of GenAI with course learning outcomes.

See examples below:

OPTION 1: The use of artificial intelligence tools such as ChatGPT, Claude, Copilot, NotebookLM, and many others to complete graded coursework (assignments, exams, projects, etc.) in this course is not permitted under any circumstances. For the purposes of this course, the use of GenAI tools for any part or process of completing graded coursework (including editing, idea generation, organization, format development, etc.) will be considered academic misconduct, as it violates the principle of student work being authentic and original.

OPTION 2: The use of artificial intelligence tools is not permitted in any graded course assignments unless explicitly stated otherwise by the instructor. This includes ChatGPT, Copilot, Gemini, and other AI tools and programs.

Students are permitted to use AI tools as a study aid for coursework only, such as for tutoring or preparing practice questions; these tools may not be used to create content for any assessed work or final submission. However, students are ultimately accountable for the work they submit. Any content generated or supported by an artificial intelligence tool must be evaluated for accuracy and cited appropriately. Your instructor will provide further information in class. If you have questions about this, please speak with your instructor.

Students are permitted to use AI tools for the idea stages of coursework only, such as gathering information or brainstorming; these tools may not be used to create content for any assessed work or final submission. However, students are ultimately accountable for the work they submit. Any content generated or supported by an artificial intelligence tool must be evaluated for accuracy and cited appropriately. Your instructor will provide further information in class. If you have questions about this, please speak with your instructor.

Students are permitted to use GenAI tools as an editor and proofreader for coursework only, such as for checking spelling, sentence structure, and grammar; these tools may not be used to create content for any assessed work or final submission. However, students are ultimately accountable for the work they submit. Any content generated or supported by an artificial intelligence tool must be evaluated for accuracy and cited appropriately. Your instructor will provide further information in class. If you have questions about this, please speak with your instructor.

Students are permitted to use AI tools to produce outputs that are then evaluated and compared. However, students are ultimately accountable for the work they submit. Any content generated or supported by an artificial intelligence tool must be evaluated for accuracy and cited appropriately. Your instructor will provide further information in class. If you have questions about this, please speak with your instructor.

Students are permitted to use AI tools for all aspects of coursework, including as a study aid, idea generator, and editor, for evaluating outputs, and for creating the final product for submission. However, students are ultimately accountable for the work they submit. Any content generated or supported by an artificial intelligence tool must be evaluated for accuracy and cited appropriately. Your instructor will provide further information in class. If you have questions about this, please speak with your instructor.

Examination and rethinking of assessments is critical. Instructors may be using assessments that are vulnerable to students using AI in a different lane from the one designated for the assessment, for example, using AI to co-author portions of an essay when AI has been prohibited for that assessment. This process, which can be done independently or in collaboration with NIC's Centre for Teaching and Learning Innovation (CTLI), helps determine whether a redesign is necessary based on the potential vulnerabilities of the assessment or the future skills students need to develop.

Instructors can test their own assessments for vulnerability by trying to use AI to help complete them. Instructors should be mindful that some tools may use uploaded content (whether uploaded by students, instructors, or others) to train their AI systems, thus making the content publicly accessible. If redesigning assessments is needed, it is recommended to start small: focus on the assessment that poses the greatest challenge or has the highest impact. Start with an assessment redesign for one term, evaluate its effectiveness, and refine based on the experience.

If instructors provide permission for AI use in coursework, they should also ensure that students know how to appropriately acknowledge use of these tools. See NIC's Library AI guide. For example, a student might include a brief statement such as: "I used ChatGPT to brainstorm topic ideas; all final wording is my own." You may also choose to have students provide an appendix to their work showing prompts and outputs. If students are not sure whether and how to acknowledge AI use in their academic work, they should check with their instructors.

NIC strongly advises against the use of AI detection tools on student work due to several legal, pedagogical, and practical concerns. Instructors must not upload student academic work or personal information to AI detectors that have not undergone a Privacy Impact Assessment (PIA) and received formal approval for use at NIC. At this time, no AI detection tools have been approved or are undergoing a PIA at NIC.

Uploading students' personal information to unapproved services may violate the Freedom of Information and Protection of Privacy Act (FIPPA). Additionally, such actions could breach the Copyright Act, as students retain copyright ownership of their work.

  • Accuracy and Reliability: AI detection tools are often inconsistent in identifying AI-generated content, with high rates of false positives that can mistakenly flag original student work. False accusations can lead to undue stress, reputational harm, and disputes that are difficult to resolve due to the opaque nature of detection algorithms.
  • Bias: These tools may disproportionately flag work by non-native English speakers, raising equity issues.
  • Evasion: Detectors can be easily fooled, making their findings unreliable.
  • Rapid Advancement: AI technology evolves quickly, and detection tools struggle to keep pace.
  • Transparency: Most tools lack the ability to explain how or why content is flagged as AI-generated, leaving students with little recourse to challenge incorrect assessments.

Currently, NIC does not plan to purchase or support AI detection tools institutionally, aligning with practices at many other post-secondary institutions. Faculty are encouraged to explore alternative approaches, such as designing assessments that emphasize process and originality, to uphold academic integrity in their courses.

Because NIC has not yet completed or approved any Privacy Impact Assessments (PIAs) for GenAI tools, NIC instructors cannot require students to create accounts with GenAI tools or use GenAI tools that may collect their personal information, whether through student or instructor inputs.

The Freedom of Information and Protection of Privacy Act (FIPPA) is provincial legislation that concerns the public's right to access information held by public bodies and the protection of individuals' privacy. FIPPA provides the authorization for how public bodies may collect, use, and disclose personal information. It is illegal for a public body to collect, use, or disclose personal information in a way that is not authorized by FIPPA.

Privacy Impact Assessments (PIAs) are a legislative requirement of FIPPA for any initiative that involves the collection, use, or disclosure of personal information. PIAs assess a tool's compliance with FIPPA and evaluate any privacy and security risks.

Personal information (PI) refers to recorded information about an identifiable individual, excluding contact information used for business purposes. NIC is committed to protecting the privacy of all faculty, staff, and students and is required to act in accordance with the Freedom of Information and Protection of Privacy Act (FIPPA). Examples of PI include students' names, personal contact information, academic history, student numbers, and financial information. Additionally, student assignments may contain PI about their lived experiences and may also constitute their intellectual property.

Collecting Personal Information Under FIPPA

The most common reason for collecting PI under FIPPA is when the information is necessary for and directly related to a program or activity of the public body. If NIC collects PI from students, whether directly or through third-party tools, for assignments or activities necessary to achieve the learning outcomes of a course, NIC is responsible for ensuring that all potential collection, use, and disclosure of PI complies with FIPPA.

If students are required to use tools or services that collect PI, these tools must have undergone an approved Privacy Impact Assessment (PIA) to confirm their compliance with FIPPA. Without an approved PIA, NIC cannot guarantee compliance and therefore cannot mandate students to use such tools.

What Personal Information Does GenAI Collect?

AI tools and services often collect PI from users. At a minimum, account creation requires enough data to associate an individual with their account, such as a name and email address. Depending on the tool and payment model, additional demographic data and payment information may also be collected.

Even if a user account is not required, AI tools may collect other data, depending on their terms of service. Examples of collected data include:

  • Log Data: IP address, date/time of use, browser settings.
  • Usage Data: Country, time zone, content requested/produced.
  • Device Data: Information about the user's device.
  • Session Data: Interactions during the use of the tool.

Any personal information voluntarily entered into an AI tool may also be collected. This data may be stored, used for further training of the AI model, or even sold to third parties for marketing or other purposes. Additionally, much of this data is often stored outside Canada, raising further privacy concerns.

NIC faculty and staff are encouraged to ensure any AI tools used in learning activities comply with FIPPA requirements to uphold student privacy and intellectual property rights.

AI tools rely on content drawn from extensive datasets used during their training. Many resources in these datasets may be protected by copyright and may not have been shared with or approved for use by the AI tool. This can result in outputs that infringe on copyright. For example, an AI tool may generate content derived from a journal article that was uploaded by a user without the proper permissions. Such outputs could constitute copyright infringement of the original source material.

Copyright Infringement and Fair Dealing

The relationship between copyright law and fair dealing in the context of AI remains unclear. Additionally, Canadian copyright law is currently understood to protect only works created by humans. This raises questions about who owns the copyright of materials generated by AI. Given the varying degrees of human input involved in using AI tools, the determination of authorship and ownership for these works has yet to be clarified under Canadian law.

Educators and students should be cautious when inputting their own intellectual property, such as teaching materials or academic work, into AI tools. Data or content entered into these tools may be used for further training of the AI system or could be shared beyond the user's control.

Uploading third-party materials, such as journal articles, textbooks, or teaching resources, into AI tools without proper authorization may constitute copyright infringement. To avoid this, ensure that:

  • You have explicit permission from the copyright holder, or
  • The use of the material qualifies under Canada's Fair Dealing provisions.

Implications for Open Educational Resources (OER)

Using GenAI to create Open Educational Resources (OER) introduces additional complexities regarding copyright and intellectual property. These implications are still being explored and determined.

For more information on AI and copyright issues, NIC faculty, staff, and students are encouraged to consult institutional resources or reach out to the NIC Library for guidance.

The use of AI tools in teaching and learning opens new possibilities but also raises significant ethical considerations. Awareness of these factors is crucial to ensure that AI is used responsibly and in alignment with NIC's values. Understanding these risks also supports the development of AI literacy, helping educators and students make informed decisions about when and how to use these tools.

Bias and Discrimination:
AI tools generate content based on extensive datasets, which often include societal biases (e.g., racism, sexism, ableism). These biases can be reflected in the tool's outputs, perpetuating inequalities and discrimination. This issue has been documented in both text and image generation.

Data Collection:
AI tools require large amounts of data to function, including potentially sensitive personal or copyrighted information. There is a risk of misuse, data breaches, or unauthorized data sharing.

Lack of Human Interaction:
While AI can enhance personalized learning, it cannot replace meaningful human interactions, which are essential for social and emotional development in educational settings.

Unethical Labor Practices:
The development of AI tools often relies on low-paid labor, particularly in the Global South, to train models and moderate content, raising concerns about exploitation.

Constantly Changing Policies:
AI platforms frequently update their terms of service, privacy policies, and intellectual property rules, requiring users to stay informed about these changes.

Hallucinations and Unreliable Content:
AI models generate content predictively rather than accurately. This means they can produce fabricated or inaccurate information, posing risks when such outputs are mistaken for truthful or reliable.

Indigenous Knowledge and Relationships:
AI tools may pose risks to Indigenous data sovereignty and cultural protocols, including cultural appropriation and perpetuating stereotypes. They may not respect the First Nations Principles of OCAP® (Ownership, Control, Access, and Possession) or Indigenous intellectual property rights.

Environmental Impact:
Training and operating AI models consume significant resources, including electricity and water. For example, generating a single image may use as much energy as charging a smartphone, while an AI search can consume 4–5 times the energy of a traditional web search.

Privacy Invasion Through Re-Identification:
AI can re-identify individuals in anonymized data, potentially leading to privacy violations.

Equity in Access:
Access to AI tools depends on technology, reliable internet, and digital literacy skills. Barriers such as geographic location, costs, or accessibility challenges can limit equitable use, particularly for users with disabilities.

Misuse of Generated Content:
AI can be exploited to create fake news, deepfakes, or impersonations, leading to ethical concerns around disinformation and misuse.

Ownership and Control of Generated Content:
Determining ownership of AI-generated content remains a gray area, raising questions about intellectual property rights and usage.

Risk to Critical Thinking and Creativity:
Over-reliance on AI for tasks like reading, writing, or idea generation may undermine critical thinking and creativity.

Despite the considerations above, GenAI tools offer significant potential to enhance teaching and learning.

Personalized Learning:
AI can adapt learning experiences to individual needs, styles, and interests, offering a tailored educational experience.

Enhanced Productivity:
These tools streamline curriculum development, content creation, and research, freeing up educators to focus on student engagement and relationship-building.

Accessibility:
AI supports accessibility by enabling features like text-to-speech, speech-to-text, automatic captions, alt-text generation, translation, and audio descriptions. It can also enhance Universal Design for Learning (UDL) and support neurodiverse learners.

Democratization of Knowledge:
By reducing the need for advanced technical skills, AI provides more equitable access to information and creative tools.

Creativity:
AI enables rapid generation of creative and original content, encouraging exploration and innovation in learning activities.

Improved Decision-Making:
AI's ability to analyze data and detect patterns can aid in diagnosis, predictions, and the design of effective learning strategies.
