
Palindrome

import java.util.Scanner;
import java.io.File;

public class PalindromeExample {
    static int counter; // number of palindromes found so far

    // Returns true if s reads the same forwards and backwards.
    public static boolean isPalindrome(String s) {
        int low = 0;
        int high = s.length() - 1;
        while (low < high) {
            if (s.charAt(low) != s.charAt(high)) {
                return false;
            }
            low++;
            high--;
        }
        counter++;
        return true;
    }

    public static void main(String[] args) throws Exception {
        File infile = new File("C:/Users/MARKJIM/Desktop/pogi.txt");
        Scanner input = new Scanner(infile);
        while (input.hasNext()) {
            String word = input.nextLine();
            if (isPalindrome(word)) {
                System.out.printf("Found a palindrome: %s\n", word);
            }
        }
        System.out.println("Number of Palindromes in the file: " + counter);
        input.close();
    }
}

Research about the relationship between lexical analysis and finite automata. A finite automaton can be used as a model for what happens during lexical analysis: the program is scanned from beginning to end and divided into tokens. Finite automata are used to specify the tokens of programming languages. They are also used in "model checking", reasoning about systems with the objective of proving that they satisfy useful properties, and in statistical models for analyzing biological and textual sequences.

A finite automaton is an abstract machine that serves as a recognizer for the strings that comprise a regular language. The idea is that we can feed an input string into a finite automaton, and it will answer "yes" or "no" depending on whether or not the input string belongs to the language that the automaton recognizes. Exactly one state of a finite automaton is designated as the start state. The start state is often represented by marking it with a minus ("-") symbol. JFLAP denotes the start state by marking it with a triangle. (JFLAP is a software package for experimenting with finite automata and other mechanisms for specifying formal languages.)
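The idea of feeding an input string into a finite automaton and getting a yes/no answer can be sketched directly in Java. This is a minimal illustrative example, not taken from the text: the language recognized (binary strings with an even number of 0s), the state numbering, and the transition table are all assumptions chosen to keep the automaton small.

```java
// A minimal DFA sketch: accepts binary strings containing an even
// number of 0s. States, transitions, and the accepting set are
// illustrative assumptions.
public class EvenZerosDFA {
    // TRANSITION[state][symbol]: symbol index 0 means '0', 1 means '1'
    static final int[][] TRANSITION = {
        {1, 0},  // state 0 (even 0s seen): '0' -> state 1, '1' -> state 0
        {0, 1}   // state 1 (odd 0s seen):  '0' -> state 0, '1' -> state 1
    };
    static final int START = 0;
    static final boolean[] ACCEPTING = {true, false};

    // Run the automaton over the input and report accept/reject.
    static boolean accepts(String input) {
        int state = START;
        for (char c : input.toCharArray()) {
            if (c != '0' && c != '1') {
                return false; // symbol not in the alphabet
            }
            state = TRANSITION[state][c - '0'];
        }
        return ACCEPTING[state];
    }

    public static void main(String[] args) {
        System.out.println(accepts("0110")); // two 0s -> accepted
        System.out.println(accepts("000"));  // three 0s -> rejected
    }
}
```

The transition table plays the role of the state diagram one would draw in JFLAP; each row is a state and each column a symbol of the alphabet.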
Lexical analysis is the process of reading the source text of a program and converting it into a sequence of tokens. Since the lexical structure of more or less every programming language can be specified by a regular language, a common way to implement a lexical analyzer is to:
1. Specify regular expressions for all of the kinds of tokens in the language. The disjunction of all of these regular expressions then describes any possible token in the language.
2. Convert the overall regular expression specifying all possible tokens into a deterministic finite automaton (DFA).
3. Translate the DFA into a program that simulates the DFA. This program is the lexical analyzer.

Lexical analyzer generators translate regular expressions (the lexical analyzer definition) into finite automata (the lexical analyzer).

For example, a lexical analyzer definition may specify a number of regular expressions describing different lexical forms (integer, string, identifier, comment, etc.). The lexical analyzer generator would then translate that definition into a program module that uses a deterministic finite automaton to analyze text and split it into lexemes (tokens).
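The generated module described above can be sketched by hand. This is a toy example under stated assumptions: only two token kinds (INTEGER matching [0-9]+ and IDENTIFIER matching [a-z]+) plus whitespace, with the DFA states implemented implicitly as the branches of the scanning loop. A real generator such as lex would derive the automaton from the regular-expression definitions instead.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal hand-written DFA-style lexer sketch. Token kinds and
// their patterns are illustrative assumptions, not from the text.
public class TinyLexer {
    static List<String> tokenize(String src) {
        List<String> tokens = new ArrayList<>();
        int i = 0;
        while (i < src.length()) {
            char c = src.charAt(i);
            if (Character.isWhitespace(c)) { // skip separators
                i++;
                continue;
            }
            int start = i;
            if (Character.isDigit(c)) {
                // "in INTEGER" state: consume a maximal run of digits
                while (i < src.length() && Character.isDigit(src.charAt(i))) i++;
                tokens.add("INTEGER(" + src.substring(start, i) + ")");
            } else if (Character.isLetter(c)) {
                // "in IDENTIFIER" state: consume a maximal run of letters
                while (i < src.length() && Character.isLetter(src.charAt(i))) i++;
                tokens.add("IDENTIFIER(" + src.substring(start, i) + ")");
            } else {
                tokens.add("UNKNOWN(" + c + ")"); // symbol outside both patterns
                i++;
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        // prints [IDENTIFIER(sum), INTEGER(42)]
        System.out.println(tokenize("sum 42"));
    }
}
```

Each branch of the loop corresponds to a state of the DFA, and the "maximal run" loops implement the longest-match rule that lexical analyzers conventionally follow.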
