
    IEEE AITEST 2023 - The 5th IEEE International Conference on Artificial Intelligence Testing

    Website: https://ieeeaitest.com/

    Category: TEST; Artificial Intelligence

    Deadline: February 19, 2023 | Date: July 17-20, 2023

    Venue/Country: Harokopio University of Athens, Greece

    Updated: 2023-02-12 00:51:54 (GMT+9)

    Call For Papers - CFP

    The 5th IEEE International Conference on Artificial Intelligence Testing (AITest 2023)

    *https://easychair.org/conferences/?conf=aitest2023

    *https://ieeeaitest.com

    *https://www.facebook.com/IEEEAITest/

    *JULY 17-20, 2023 | ATHENS, GREECE

    *All accepted papers will be published by IEEE Computer Society Press (EI-Index) and included in the IEEE Digital Library.

    Artificial Intelligence (AI) technologies are widely used in computer applications to perform tasks such as monitoring, forecasting, recommendation, prediction, and statistical reporting. They are deployed in a variety of systems, including driverless vehicles, robot-controlled warehouses, financial forecasting applications, and security enforcement, and are increasingly integrated with cloud/fog/edge computing, big data analytics, robotics, the Internet of Things, mobile computing, smart cities, smart homes, intelligent healthcare, etc. Despite this dramatic progress, the quality assurance of existing AI application development processes is still far from satisfactory, and the demand for demonstrable levels of confidence in such systems is growing. Software testing is a fundamental, effective, and recognized quality assurance method that has proven cost-effective in ensuring the reliability of many complex software systems. However, the adaptation of software testing to the peculiarities of AI applications remains largely unexplored and requires extensive research.

    On the other hand, the availability of AI technologies provides an exciting opportunity to improve existing software testing processes. Recent years have shown that machine learning, data mining, knowledge representation, constraint optimization, planning, scheduling, multi-agent systems, etc. have real potential to positively impact software testing, and have seen a rapid growth of interest both in testing AI applications and in applying AI techniques to software testing.

    This conference provides an international forum for researchers and practitioners to exchange novel research results, to articulate the problems and challenges arising from practice, to deepen our understanding of the subject area with new theories, methodologies, techniques, and process models, and to improve practice with new tools and resources.

    Topics Of Interest: The conference invites papers reporting original research on AI testing, as well as reports of best practices in industry and of the challenges faced in practice and research. Topics of interest include (but are not limited to) the following:

    • Testing AI applications (see the illustrative sketch after this list)

    • Methodologies for testing, verification, and validation of AI applications

    • Techniques for testing AI applications

    • Tools and environments for automated and semi-automated testing of AI applications, covering various testing activities and the management of testing resources

    • Specific concerns of software testing for particular types of AI technologies and AI applications

    • Applications of AI techniques to software testing

    • Machine learning applications to software testing, such as test case generation, test effectiveness prediction and optimization, test adequacy improvement, test cost reduction, etc.

    • Constraint Programming for test case generation and test suite reduction

    • Constraint Scheduling and Optimization for test case prioritization and test execution scheduling

    • Crowdsourcing and swarm intelligence in software testing

    • Genetic algorithms, search-based techniques, and heuristics for the optimization of testing

    • Data quality evaluation for AI applications

    • Automatic data validation tools

    • Quality assurance for unstructured training data

    • Large-scale unstructured data quality certification

    • Techniques for testing deep neural network learning, reinforcement learning and graph learning
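
    As a brief illustration of the first topic above (testing AI applications), the following minimal sketch, in Python, shows a metamorphic test for a machine-learning classifier: the relation under test is that small random perturbations of the inputs should rarely change the model's predictions. The model (a scikit-learn random forest trained on the digits dataset), the noise level, and the 5% tolerance threshold are illustrative assumptions chosen for this sketch, not techniques prescribed by the call.

        # Minimal metamorphic-testing sketch (illustrative assumptions, not part of the CFP):
        # the relation under test is "small additive input noise should rarely flip predictions".
        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        # Train a small classifier to act as the AI system under test.
        X, y = load_digits(return_X_y=True)
        X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)
        model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

        # Apply the metamorphic relation: perturb the inputs slightly and compare predictions.
        rng = np.random.default_rng(0)
        original = model.predict(X_test)
        perturbed = model.predict(X_test + rng.normal(0.0, 0.1, size=X_test.shape))

        # The relation itself serves as the test oracle: the flip rate should stay low.
        flip_rate = np.mean(original != perturbed)
        assert flip_rate < 0.05, f"metamorphic relation violated: flip rate {flip_rate:.2%}"

    Metamorphic relations of this kind are one common way to obtain a test oracle when the expected output of an AI component is not known in advance; submissions may of course address any of the topics above with any technique.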

    Important Dates

    • February 12th, 2023: Abstract submission deadline

    • February 19th, 2023: Submission Deadline

    • April 24th, 2023: Author notification

    • May 15th, 2023: Final paper submission (camera-ready) and conference registration

    • July 17 to July 20, 2023: Conference dates

    Paper Submission

    • Regular papers (8 pages IEEE double column format) and short papers (2 pages IEEE double column format).

    We welcome submissions of both regular research papers (limited to 8 pages), which describe original and significant work or report on case studies and empirical research, and short papers (limited to 2 pages), which describe late-breaking research results or work in progress with timely and innovative ideas.

    • AI Testing in Practice (8 pages IEEE double column format). The AI Testing in Practice track provides a forum for networking and for exchanging ideas and innovative or experimental practices, addressing software engineering research that has a direct impact on the practice of software testing for AI.

    • Tool Demo Track (4 pages IEEE double column format). The tool track provides a forum to present and demonstrate innovative tools and/or new benchmarking datasets in the context of software testing for AI.

    Submission Guidelines: All papers must be written in English. Manuscripts must include a title, an abstract, and a list of 4-6 keywords. All papers must be prepared in the IEEE double column proceedings format. Please see: https://www.ieee.org/conferences/publishing/templates.html

    IEEE AITest 2023 uses a double-blind review policy. Authors are required to remove their names, affiliation(s), and other identifying information from the header of the manuscript. Authors are required to cite their previous work in a neutral manner, for example, avoiding “in our previous work [3]” in favor of “as shown in [3]”. Papers that do not meet these anonymization requirements may be desk-rejected without further review. All submitted papers will be peer-reviewed, and the name(s) of the author(s) will not be visible to the reviewers of a paper. Authors should report any conflict of interest with the list of PC members when submitting the manuscript, in which case the PC Chairs will exclude the corresponding PC member(s) from reviewing the paper. At least one author of each accepted paper must register for the conference and confirm that they will present the paper in person.

    Authors must submit their manuscripts by February 19th, 2023, 23:59 AoE at the latest via: https://easychair.org/conferences/?conf=aitest2023. For more information, please visit the conference website at https://ieeeaitest.com or the conference Facebook page at https://www.facebook.com/IEEEAITest/

    Paper Publication: Authors of the best papers will be invited to submit an extended version (with at least 30% new content) to selected special issues (TBA).

    On Behalf of the IEEE AITest'23 Conference Organizing Committee

    Prof. My El Hassan Charaf (Publicity Chair)

    Faculty of Sciences, Ibn Tofail University

    Kenitra, Morocco

